In other cases, the source of the slowdown is less clear. Is a physical device just not delivering its signals any sooner?
I’ve played games where you have to walk to a very precise spot, hit a button, and wait literally one whole second before ANY response is visible onscreen or in audio. (And if it turns out you didn’t really take the action you thought you did, you have to walk in circles to try a slightly different spot, and wait again). Why should that ever be the case? How can a super-fast console not immediately display something or play some sound to show that you took the action?
But most of what you are talking about is software latency. At 1/30th of a second each, pipelined software stages seem cheap individually but pile up very quickly: hit a button, read the button, react in AI, react in animation, react in physics, react in graphics, process on the GPU, process in the display device. These can easily add up to 5/30ths of a second with poorly planned software. In the middle of all that, the animation and audio have aesthetic requirements for smooth transitions that can insert a half-second lag into the process. Now we're up to 20/30ths.
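The pile-up is easy to tally. Here's a rough sketch of the budget described above, assuming each pipelined stage costs one 30 Hz frame (the stage names and costs are illustrative, not measured from any real engine):

```javascript
// Hypothetical latency budget: each pipelined stage adds one 30 Hz frame
// (~33 ms) before the player sees any response.
const FRAME_MS = 1000 / 30;

const stages = [
  { name: "input read",      frames: 1 },
  { name: "AI reaction",     frames: 1 },
  { name: "animation",       frames: 1 },
  { name: "physics",         frames: 1 },
  { name: "graphics submit", frames: 1 },
  { name: "GPU",             frames: 1 },
  { name: "display device",  frames: 1 },
];

// A half-second "smooth transition" inserted in the middle for aesthetics.
const blendFrames = 15;

const totalFrames = stages.reduce((sum, s) => sum + s.frames, 0) + blendFrames;
console.log(totalFrames, "frames =", (totalFrames * FRAME_MS).toFixed(0), "ms");
// 22 frames = 733 ms
```

Even with every stage at its best case, the blend dominates — which is why the animation compromise below matters so much.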
Regarding animations: I've been in conversations with managers requesting character animations to be "smoother, but more poppy!" because of the conflicting needs of aesthetics and control latency. The best compromise I've found is to design a smooth transition, but have the underlying representation pop and the visuals skip immediately to midway through the animation.
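A minimal sketch of that compromise (all names hypothetical): the gameplay state changes heading instantly so controls never lag, while the visual layer plays the turn clip starting from its midpoint, so it still looks smooth but reaches the new pose in half the authored time.

```javascript
// Illustrative clip length, in seconds.
const TURN_CLIP_LENGTH = 0.4;

function onDirectionChange(character, newHeading) {
  // Underlying representation: pops instantly, so physics and controls
  // respond with zero added latency.
  character.logical.heading = newHeading;

  // Visuals: play the smooth transition clip, but skip straight to its
  // midpoint so the on-screen pose catches up quickly.
  character.visual.clip = "turn";
  character.visual.clipTime = TURN_CLIP_LENGTH / 2;
}

const character = { logical: { heading: 0 }, visual: { clip: null, clipTime: 0 } };
onDirectionChange(character, 90);
console.log(character.logical.heading, character.visual.clipTime); // 90 0.2
```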
The publisher wanted both simultaneously. They wanted the human player character to instantly change direction in response to controls. But, they also wanted the character to move like a semi-realistic human who has momentum and takes a while to change directions instead of like a sprite that instantly changes direction. :/
For a contrast of what happens when there's no squash and stretch in animation, take a look at pretty much everything ever made by Hanna-Barbera before 1990. Everything remains almost pathologically on-model all the time to reduce animation costs.
People will act that way. Some things will compress. Certainly not all things, though. So I'm curious whether it's always seen as better.
Flimsy windows are just annoying to me, which is why I find the view that they signal quality curious.
More realistic animation is typically described as more realistic. Not "popping and snappy."
For instance, in a lot of cases, a human cannot reasonably observe a particular type of change on every frame so you can skip frames. What I mean is, suppose you have tasks A, B, C and D to perform “each frame”: you might be able to perform tasks A and B on odd-numbered frames and C and D on even-numbered frames, with the user no wiser, as long as the result seems fine.
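That odd/even interleaving can be sketched in a few lines (task names are placeholders): per-frame work is halved, at the cost of each task updating at half the frame rate.

```javascript
// Frame-interleaved scheduling: A and B run on odd frames, C and D on
// even frames. The user can't perceive the skipped frames as long as
// each task's result changes slowly enough.
const ran = [];
const taskA = () => ran.push("A");
const taskB = () => ran.push("B");
const taskC = () => ran.push("C");
const taskD = () => ran.push("D");

function tick(frame) {
  if (frame % 2 === 1) { taskA(); taskB(); }
  else                 { taskC(); taskD(); }
}

for (let frame = 1; frame <= 4; frame++) tick(frame);
console.log(ran.join(",")); // A,B,C,D,A,B,C,D
```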
Another technique is to prioritize the start and finish but not in-between. Often, intermediate frames are relatively crappy from a “niceness” or even correctness standpoint, and nobody really notices because the frames go by quickly. As long as the end frame looks as nice as possible and everything is in exactly the right place, you can get away with a lot of short-cuts for the steps taken to get there.
The problem with techniques like these is that it's almost impossible to fully generalize them (e.g. in the case of intermediate frames, if some of them are really wrong then you get sudden clipping or jumpiness).
So if your 'fast' technique only works for a certain set of parameters, then you have just introduced an implicit dependency into your system: things are fast enough while the app looks like X, but go a bit beyond that and it suddenly breaks.
Ideally, a game would sample input multiple times per frame, use wide parallelism for every step (very difficult for graphics until DX12/Vulkan came along), start some GPU work before physics completes, render in less than 1/60th of a second, and users would enable the no-processing "game mode" on their lag-optimized TVs. But none of that is common practice.
You can solve any problem by adding more software and layers of abstraction, except the problem of too much software. That's the state we're in now.
The BBC micro could have a word processor in ROM that would boot almost instantly and responded to keypresses immediately. This was because the software was written in assembler and had to fit in a small ROM. The choice of using a TV system running (say) Android and a web browser means that, although the software is slightly easier to write and the processor is 100 times faster, it has to execute 10,000 times more machine instructions in order to render the UI.
This is partly why people like Maciej campaign against multi-megabyte text pages. Another way is possible.
Over those 5 years, the software requirements slowly crept upward while the hardware performance stayed the same and couldn't be changed.
That's why test-driven design is valuable -- you iterate while testing.
There's no competition in these systems because people choose the car and get lumbered with the UI on the console. Kinda like if houses had unique electrical systems and you couldn't change the white-goods.
I think the CarPlay/MirrorLink/Android Auto thing is probably a better model though. Make the console dumb and let me connect my upgraded-every-year phone that's far more powerful.
Some newer systems are based on QT or Android. These typically have better performance, because the underlying frameworks have at least a decent design.
We would expect reviews to point out if a feature such as lane assist fails or has noteworthy failures (such as rapid weaving inside the lane) but maybe not so much if it works properly.
That's a reason why I don't believe in seeing safe autonomous cars during the next few years at all. But maybe Tesla is that much better - haven't ridden one.
It's also highly hackable. =)
It was designed by Johnson Controls (JCI) but the IVI group was recently sold to Visteon, which probably explains the sudden lack of momentum from Mazda on new features (like, cough, Carplay...which was announced 2 years ago and never showed up).
Most of the people hacking on the unit hang out at mazda3revolution.com. Here's a page indexing their work so far:
I don't have the navigation enabled, so I can't speak to how responsive the map display is.
I can't speak for the quality of Mercedes interface (and this is obviously marketing for non-programmers) but LOC seems like an odd thing to be emphasising.
rm -rf /
KLOC are something you spend, a debt you accumulate.
Really makes me wish the whole thing was more hackable.
If a car is, say, 100k, how much of the componentry cost is the wiring harness, the "nervous system", and the screen?
And the lag when hitting a touch-screen button is incredibly frustrating. I want my buttons!
If an organization creates and utilizes software as part of their ongoing concern... it is, at some level, a software company.
They're not experts in much, but contracting for quality parts IS something they're supposed to be experts in.
Oooooh, don't get me started on DVD menu screen navigation. What shambling, drooling idiot decided that it was critical for me to watch an unskippable spoiler-rich montage of scenes from the entire length of the movie before I can click "Play Film", followed by another unskippable montage afterward? Insanity.
I have a Siemens washing machine, and the interface has a latency of >500 ms. How they fucked it up is way beyond me. It consists of nothing more than a rotary switch, four buttons and three 7-segment LED displays.
I've played with the thought of disassembling the firmware just to see how they fucked this up this bad. I could never make something this unresponsive even if I tried.
It's utterly fascinating and pisses me off every time I do the laundry.
For one, detergent comes in at least three forms: Powder, liquid and those little plastic pouches. Powder would be pretty easy (but the dosage would be brand-specific) and liquid would be messy (flow rate would be a challenge).
The easiest way would be if all machines could accept a "standard pellet" which gets loaded in some kind of completely fool-proof way so the machine cannot mechanically choke on them, ever, or accidentally add too few/many to a load.
Same thing with dishwashers.
As for why you can't just close the door and walk away: Setting the program is an important step in washing clothes. Modern machines do have a single "start" button.
Surely this kind of reliability is a good trade-off Vs having to pour some detergent for each wash?
Key-press response time can frequently be more than 30 seconds, depending on what the action is. Of course, you might say that's because of Blu-ray bloat on more recent disks, but I can assure you it's been that way since the day I purchased it. Sure, some disks were better than others, but the multi-minute boot-up, disk load times, player menu popup times, etc. have been there since the beginning. I used to use it as a demo against my HD DVD player of why Blu-ray wasn't ready for primetime, and it was a 3rd-generation Blu-ray player.
There's a whole lot of Phillips and Sony players that are based off some ancient Mediatek SDK.
Edit: Wow, that's old: Sigmatek, not Mediatek. A makefile in the GPL source suggests there's a similar Pioneer player somewhere, too.
I love animations when they make the UI more understandable. I can't stand them when they are more than a couple hundred milliseconds though. I don't even think "synchronous" when I think of animations. That sounds terrible. If they are quick animations it doesn't seem as big of a deal as the long running ones though.
To UI designers:
Have some consideration for the f user!
So in the presence of this delay "partition," you have three choices, really, and the choice you make depends on the application.
A) You can choose to be available and responsive. Show the user feedback immediately and never concern yourself with global state. Technically, I'd call this an illegal choice because you must have some sort of state to even be executing code. Unless you simply don't write the code, in which case your job is easy!
B) You can choose to be immediately available and eventually consistent. You calculate the response quickly with the assumptions you have most available (local memory, disk), all while transmitting events and waiting for the further-away less-available state to become available.
This is the way many online games that need quick feedback to be fun are done.  Unfortunately, this is also the source of the lag jumps that you see. You're always running with [partition-size in ms] outdated global state, so the assumptions you made when calculating outcomes are going to be incorrect. This is why your headshot might turn into a total miss when the player jumps five feet to his right and, oh yeah, you also died.
C) Don't react to events until the global state has been updated.
This means a full round-trip plus processing remotely and locally before that click event performs the action it is supposed to. This can be anything from a crappy experience (I shot into the ground, why should I wait), all the way to the only sensible choice (if integrity is highly important, say in transactions and avoiding double-spend).
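Choice B above is usually implemented as client-side prediction with server reconciliation. A minimal sketch (all names illustrative): the client applies each input immediately against its local state, remembers what it guessed, and rebases on the authoritative server state when it arrives — which is exactly where the "lag jump" correction comes from.

```javascript
function makeClient() {
  return { x: 0, pending: [] };
}

function predictMove(client, dx) {
  client.x += dx;          // immediate local feedback
  client.pending.push(dx); // remember the unacknowledged guess
}

function onServerState(client, serverX, ackedCount) {
  // Drop inputs the server has acknowledged, then rebase: authoritative
  // position plus any still-unacked local predictions.
  client.pending = client.pending.slice(ackedCount);
  client.x = client.pending.reduce((x, dx) => x + dx, serverX);
}

const c = makeClient();
predictMove(c, 5);
predictMove(c, 5);      // player sees x = 10 instantly
onServerState(c, 3, 1); // server only applied the first move, and as 3
console.log(c.x);       // 8: the visible "lag jump" correction
```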
Really, it's so much more than this too. On top of availability vs consistency you have to account for some trust model (the game client says it was a headshot, but how do I know I can trust the client) and information security (confidentiality, availability, integrity).
So TL;DR there are lots of very hard problems in distributed systems and sometimes people just default to one stance or the other to balance their cognitive load or for any number of reasons (ranging from legit to ridiculous). Sometimes they default to consistency. That's probably the case for your button-click example.
I see it as 2 different questions, "Did the computer hear me" and "Does the computer have a response for me.". Most people only make an effort to answer the latter for the user, it indirectly answers the first question anyways. But you can easily answer the first by quickly doing some sort of update, such as a progress indicator, animating a button staying depressed, etc. You don't have to mess with your real model until you get a response (or an error), but then you also don't leave the user confused for 12 seconds while your app loads search results or whatever.
Obviously it doesn't work in every situation. In most video games you want your actions to affect the gameworld immediately, even if the server doesn't know about it yet. However, for most applications just adding fast indicators that the client is aware of your actions (and staying off the UI thread) will make it feel more responsive.
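The two-question split above is cheap to implement. A sketch, with the UI calls as stand-ins for whatever toolkit you use: answer "did the computer hear me" synchronously with a pending indicator, and only touch the real model when the (possibly slow) request resolves.

```javascript
const ui = { log: [] };
const showSpinner = () => ui.log.push("spinner");
const showResults = (r) => ui.log.push("results:" + r);

async function onSearchClick(fetchResults) {
  showSpinner();                        // instant: "I heard you"
  const results = await fetchResults(); // slow: the real answer
  showResults(results);                 // later: "here's my response"
}

// Simulated slow backend returning 3 results after 50 ms.
const slowFetch = () => new Promise((ok) => setTimeout(() => ok(3), 50));

onSearchClick(slowFetch).then(() => console.log(ui.log.join("|")));
// spinner|results:3
```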
e-commerce findings from a major retailer are, alas, not applicable to every domain
Huge, crazy, insane amounts of time are WASTED by humans dickering around with interfaces that they don't understand and that are not personally optimized.
One of the things I don't hear many people talk about, but I am particularly interested in, is the coming and continued improvement of adaptive & personal interface design.
A challenge that any single interface has is that it's difficult to set up and qualify a test at a small cohort-group level (men over 70 years old who wear glasses, are homeowners, drink wine, and live in California is an actual target class we can easily devise from current ad tech, for instance).
It's challenging because - NOT ENOUGH DATA - e.g. it's very hard to run experiments and achieve statistical significance, let alone bifurcate your already limited resources to drive to that level of granularity.
But imagine an adaptive UX or set of UX preferences.
E.g. - take the same inputs -> eye tracking / natural language feedback (speech!) / interface observation / time to goal / etc. <- and then let a well-resourced ML/AI system come up with a set of experiments and pathways.
Key to not completely confuse and blow users off path will be some kind of throttling mechanism - adaptations that settle you into the UX like your body's settling into the couch cushions.
The problem with many interfaces, especially on consumer products, is that they're not discoverable, and oftentimes hide things behind inane levels of menu. Interface isn't a competitive advantage (although it should be!) so manufacturers don't invest in it.
I have a Logitech Harmony 700 (a very mainstream universal remote), I don't care for it but it's the best I could find, because I use a receiver and Apple TV. Whenever I have guests it's always a mystery for them how to use it.
I can only imagine how difficult support from friends and family would become.
"Click on the widget" "I don't see the widget" "I'm on the same page and I see the widget" "Oh, I have to click 'Show all" to see the widget"
I like that... but that's going to take a lot of work to keep it from becoming the contemporary equivalent of "microsoft clippy" but from Hell.
Has anyone pursued or published about such an approach yet?
Last time it was so bad that I stopped using Netflix on my computer until the testing stopped.
That said, the "normal" Netflix player is great and I've always been impressed with its performance and responsiveness for such a large application handling video streaming.
The feedback is generally implicit in that if it causes problems for a lot of people then the test fails because you stop watching Netflix with that device.
That being said, if you call in to customer service they can remove you from the test if it turns out that's actually the problem (sometimes the problem is that you are in two conflicting tests for example) and they do mark that down as feedback. But they want you to call in so they can better understand and record the failure mode.
You don't even have to call, you can do it via chat, if you're not the kind of person who like to talk on a phone. :)
The show is pretty awesome too.
Man their content is good, but the app sucks.
HBO is missing the most basic UX features like discoverability, recommending the next episode of shows you've been watching, ratings, etc. Great content, but basically no effort in their apps that consume their content.
Netflix is a tech company that got into content, and HBO is a content company that got into tech.
The only competitor I can compare it to is Amazon. In that regard I agree that Netflix obviously puts more into the interface. Amazon's splitting shows into seasons and not even grouping them together is maddening.
I was watching Goliath last weekend on Amazon Video via the Roku player, and 6 minutes prior to the ending of an episode, they dim the screen to start a countdown of playing the next episode. While Netflix has had some hiccups in cutting off some cold opens of some shows, it's vastly better and consistent in behaviour.
Using it with cljs-time and storing times in the app state is a really easy way to shoot yourself in the foot perf-wise, since that's based on Closure's date object and two equal date objects are not identical. This fails the fast identity check but passes the structural equality check, so no vdom gets generated, but the check is not that much cheaper than a diff.
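The trap is visible in plain JS: two Date objects for the same instant are structurally equal but never identical, so any change detection keyed on identity misses every time and falls through to the slower comparison.

```javascript
const a = new Date(2016, 0, 1);
const b = new Date(2016, 0, 1);

console.log(a === b);                     // false: fast identity check fails
console.log(a.getTime() === b.getTime()); // true: structural equality holds
```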
This is also a B2B app where we have no demand for mobile use so I haven't had to do load time or mobile perf optimization. If I did, the first thing I'd be concerned about is the bundle size since the cljs runtime plus React represents a fairly sizeable amount of overhead.
Not to say that reagent+re-frame is bad, it's just not so amazing you don't have to care about perf. I think Reagent would run particularly well on top of Inferno since the library provides lifecycle events to function components and I did experiments back in April and June on Inferno with persistent datastructures to good results. I just don't want to maintain it.
It's also easy to optimize if you integrate day8/re-frame-tracer. I've lowered the number of views touched by updates quite a bit using it. I barely put any pressure on React anymore.
The bundle size usually isn't that bad with full optimizations enabled. For one, an empty cljs project strips the entire cljs runtime, minus one defonce. You can also pass a compilation constant to React to strip its debugging features; that saves you a few dozen kilobytes as well.
I'm currently building a small app to display Twitch chat as a personal side project and it handles GamesDoneQuick's chat effortlessly. Performance has been great so far.
The TIOBE index, fwiw (please debate), suggests that by going from JS to Clojure you'd be switching from the 7th most popular language to the 47th.
In that 'top 47' there are only 3 lispy langs present: 'Clojure', 'Lisp' and 'Scheme'. That suggests programming in Lisps is a very unpopular idea.
The Cognitect website lists quite a few success stories using Clojure. I would trust the names in that list far more than TIOBE.
But really, what sold me on the language was the quality of the libraries, the incredibly helpful community and its shared focus on simplicity. Reading about a thing is no substitute for hands-on experience; it's hard to judge the trade-offs you're making without it.
Should the Inferno community start porting over stuff to their ecosystem it'll be easier to sell to clients.
And working with DOM/CSS would make it easier for the team's designers to be more engaged.
I believe Jafar gave a talk at one of the React Confs about it.
Clojure's reagent and re-frame remove most of the complexity and tools from the equation. You run the same (mostly) code on the backend and frontend. This is what I meant by quite easy :)
Everything easily wastes 90% of the CPU resources it touches, and the task manager is completely oblivious to the waste, happily reporting high usage. When you have 20+ tabs open and 10+ apps, all those "it's fast enough" apps combine to create their own variant of hell.
And that isn't even a big workload. It's no wonder that, even though computers have increased many orders of magnitude in performance over the last decade, user experiences are still generally mediocre.
Your servers now have to serve these API endpoints, whereas static pages can be offloaded to proper CDNs. For larger deployments this can make server costs eat your profits rather quickly - and that's not even considering that the dynamic route took much more development effort than the static one in the first place.
About the 90% wasted CPU, I was talking about how the CPU constantly waits for memory because very, very few programmers optimize for cache misses and lots of dynamic languages make it impossible to. Waiting on memory still shows as activity in the task manager, but the CPU isn't computing anything.
What possible reason(s) would they have for doing this? Doesn't initiating playback of a title cost them money for bandwidth and/or license fees? It's just plain infuriating, in a first-world-problem kind of way.
Talking to their support, Netflix says it's FF's problem. From what I can tell, it isn't actually FF's problem anymore (because then this wouldn't work, right?).
The Netflix experience on a Windows HTPC is, quite frankly, abysmal (whether it's the Windows Store app or through the web browser).
Panasonic, LG etc..
Whilst the Netflix UI is great compared to a huge number of apps, I do find it has actually deteriorated over the past couple of years, in terms of interface speed.
Nowadays, when I launch the app on my 2013 Panasonic TV, it stalls at the profile selection screen for about 5 seconds, and once again once a profile is loading.
It never used to do this, and I presume it has a lot to do with precaching data; it is highly annoying as it consumes keypresses during this stall, meaning you can quite accidentally start watching something you never intended to.
I know some people without a TV but that's because they choose not to consume mass media, we've done it in the past.
Where do you live? Presumably your circle still consumes mainstream media but does so via laptop/desktop/handhelds?
Just put the css/sass on .css/.sass files and import the classes from the react jsx file. No need to mix css and html.
If I understood it correctly, one can think of a React app/page as a pure function taking data and producing a DOM.
Reasoning and testing pure functions should be much easier than a set of components which partly rely on side effects.
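That testability is easy to demonstrate with a hand-rolled stand-in (this is not React's actual API, just the idea): the same state always yields the same virtual-DOM-like description, so you can test it without a browser.

```javascript
// "UI as a pure function of data": render() has no side effects, so its
// output depends only on its input.
function render(state) {
  return {
    tag: "ul",
    children: state.items.map((item) => ({ tag: "li", text: item })),
  };
}

const state = { items: ["a", "b"] };
const tree1 = render(state);
const tree2 = render(state);

// Same input, same output - referential transparency in action.
console.log(JSON.stringify(tree1) === JSON.stringify(tree2)); // true
```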
(EDIT: Help - how do you post 2 links on adjacent lines without HN breaking the formatting and sticking them on the same line?)
JSX is convenient but all React really requires is putting all of the logic to render a component under one method. You could carefully construct a piece of XML or something like that in render and then pass it to a templating engine of your choice°, instead of any JSX.
But JSX is as nice a language as any for templates, and side effects in render are clearly wrong in React, so the old issues of Turing-complete templating are ameliorated.
° You'd have to do a tiny bit of wiring to make sub-components go back into the react pipeline.
Going back quite a few years now, I once worked on a GWT app that kept its state in a central store, and updated it via firing events on an event bus. In practice, it worked somewhat like Redux, but instead of mapping state to props, in each component you'd subscribe to the update event and then update the relevant bits of the component in the event handler. Being Java, it was easy to find all references to the update event to get a quick understanding of exactly how it was being used.
Come to think of it, it would be easy enough to use React that way too, especially if using TypeScript and maybe something like RxJS.
> it can be difficult to see at a glance every place in an app that a single bit of state is being used
What problems has this caused you? Personally, I find being oblivious to what specifically needs to update is useful but I'm interested in situations where that's not the case.
This is why query-based approaches to a single UI app state (e.g. om.next) tend to be a bit easier, though they can still suffer from what you describe (especially if queries are dynamically built by a user or something). At a minimum, you can either re-use the same query or at least know through the query syntax where your code is touching some bit of state, since it's available for analysis by an IDE and easily searchable as plain text. Still, there's no perfect solution I've seen, and eventually things can get messy if you aren't careful and as you increase the number of developers touching the code.
In general, updating a UI only based on changes is a great approach. The challenge has always been identifying what changed and minimizing the tracking of those changes in a scalable way. I saw many old approaches use things like dirty flags everywhere or field-by-field comparisons. Things like React frameworks in ClojureScript make this so much easier today, because immutable structures like those in Clojure allow a very fast, cheap identity check. If your check itself is expensive, the benefits of a delta-based approach are limited vs. giving up and doing full re-renders. I've hit this in game programming either way, but usually the change-based approach wins unless there's some very specialized case or design issue, or there's simply so much raw power that it's not an issue anyway.
Where events themselves suck is predictability. This is doubly so for systems that introduce event hierarchies i.e. inheritance-like constructs for events. I strongly prefer deterministic approaches to rendering when possible. That's not to say you re-render at a fixed interval, but rather attempt to re-render changes only if they exist. It makes debugging, optimizing, and understanding the system so much easier.
From a performance and debugging point of view, events or signals-and-slots tend to cause situations where you're jumping around the code so much that you lose all kinds of CPU cache or GPU benefits (buffers, batching, pipelining, etc.), depending on what you are doing. Also, some event systems use a lot of objects, and in systems requiring heap allocations and/or garbage collection, this can become really ugly if the app is running for a while. Events do make things easy for small projects, but tend to create spaghetti for larger ones, even with a single event bus. Approaches using loops, deltas and possibly queues/mailboxes tend to scale a lot better for games, and also for apps that have performance issues. A side benefit is that your state tends to become a functional reduction, which has its own benefits such as making undo/redo, logging, and error handling easier.
Sometimes I'm rather confused by all the UI issues created in app dev. I understand them, but as someone with many decades of experience doing both game and app development, I'm like, wtf app programmers. React and things like it are, or at least can be, a bit closer to the loop-based, pipelined, time-stepped approach used in modern game architectures precisely to achieve consistent performance and do sane things in the face of GPUs.