The Elements of UI Engineering (overreacted.io)
468 points by danabramov on Dec 31, 2018 | 79 comments



I've been building consumer apps professionally since 2013 on iOS, Android, Windows Phone and the web. I also spent a few years designing and implementing UI frameworks. I've had to deal with all of the problems listed in Dan's article (with and without frameworks).

Over the years, I have grown convinced that designing and implementing UI by hand simply doesn't scale. There are too many things to consider, too many different users, too many preferences, too many possible states. Responsive design isn't just about screen sizes anymore, it's also about the user's language, culture, disabilities, input, context, preferences, connectivity, knowledge, focus, etc.

We can't expect every restaurant, bank, festival, and airline to implement their own apps and consider all of the above. You won't find a dark theme in the Domino's app. Why do we tolerate these compromises in the name of branding? Why should UIs be tightly coupled with the services and data? Why don't we have general-purpose clients?

I think the job of service providers should be to semantically annotate their data, so that a general purpose client can dynamically render it. All of these UI concerns would only have to be dealt with once, and we all would be able to sleep at night. Just let business people do business, and let UI/UX people do design.


How does that scale? It's more than just 'branding' or pretty colours. What about actual features? Say there's a standard 'airline' app that handles booking flights and then the lifetime of managing that booking: reserving a seat and so on. Airlines build the data and the APIs, OS vendors (?) build the frontends.

Then Singapore Air or whoever comes out with a new product/service - you can book time in an onboard shower. The 'standard' doesn't allow for ancillary service bookings, so how does Singapore Air get this to their customers? Go through the 'standards body' and wait for that, rather than just building it into their own app that they control end-to-end.

In reality, there are so many variations between different companies offering the same service that it would be rather infeasible to have a single contract they all follow (without compromise) to develop these provider-agnostic client apps.

In a way, browsers and native mobile SDKs are the solution to that problem - browser/OS vendors have created clients for common APIs and widget toolkits to create interfaces. The 'hard work' in delivering an application has already been done (no need for the bank to worry about HTTP, or building widgets like text fields), leaving them to deliver 'just' the layer on top of that.


You create a standard that covers 90% of the most-used cases and a nice extension API that allows crazy customization of the last 10%.


Ahh cool, so like a web browser?


A related issue is that the means of making web UIs are too low-level. There's no standard ready-made widget kit like on desktops.

HTML and CSS started as a solution for publishing text-based content, like the olde magazines but on the web. For that they are splendid: text rendering and basic input handling are taken care of, and a wide variety of output devices has been supported since day one (HTML 2.0 without tables works excellently on phones).

Then web 2.0 and web apps happened, web idioms began to change, all of people's activity with computers started moving to the web. And it turned out that to create complex UIs for all of that you have to fiddle with rather low-level primitives and handle interactions between them, because outside of text layout HTML primarily knows about divs and a handful of input widgets.

This may be good because high-level widgets will be developed independently and will evolve faster (and maybe better) than if they were built into browsers. But in the meantime we have to live with the chaos that we have.


>This may be good because high-level widgets will be developed independently and will evolve faster (and maybe better) than if they were built into browsers. But in the meantime we have to live with the chaos that we have.

I don't think that having some good built-in widgets (like a better dropdown, menus, a DataGrid) would prevent third parties from creating their own custom versions, but it would help the 99% of users who need the basic stuff (like a dropdown that can have icons).


(I need to add, though, that even if a complete and perfect-in-every-respect third-party widget kit for the web appeared today, it would face a huge problem of adoption, or rather the lack of it. In this regard, fragmentation on the web is unbelievable by desktop standards―also a consequence of the "web is for publishing" approach.)


Vanity is the reason, in my experience.

For all the downsides of AMP, it's demonstrated that letting go of aesthetic vanity in the name of fundamental soundness is a good trade to make.


What you want is gopher[1]. Instead of AMP, I think sites could honor requests with a text/plain header; the issue is monetization. But AMP is for content-based sites; SPAs capture a completely different set of use cases.

[1] https://en.m.wikipedia.org/wiki/Gopher_(protocol)


> I think the job of service providers should be to semantically annotate their data, so that a general purpose client can dynamically render it.

You mean like HTML? We tried that already and people disliked the lack of branding ability.


> people disliked the lack of branding ability.

What people? Brands or users?


I would argue both. Systems like Bootstrap still get a lot of flak


I've always thought AI would be a good solution for this. The ideal way to create a website is to just express what you see in your mind through drawings, essentially, with a simple way to represent transitions. Doing this in real time with a compiler like in Dreamweaver is awkward because there are too many nonsensical constraints (and other problems that real UI engineers can tell you, I'm just an AI guy).

So to bridge the gap you really need a function that can translate your expression into reality by understanding what parts are important for your design to work and what isn't, and making it work on any platform/client/medium. The type of function that can approximate a potentially extremely complex reality, that can be trained incrementally given examples (bugs)... Sounds a lot like modern machine learning to me.


One UI to rule them all? This would be a really interesting tool for web developers.

Thankfully, modern UI development is blessed with some really professional UI toolkits: Element UI for Vue or Semantic UI React, for instance, do a great job of making us think in terms of generic interactions instead of reinventing the wheel.

This means we get to focus on improving what exists as the project warrants, which is significantly more rewarding.


I would prefer powerful UI widgets built in; if your preferred framework wants, it can wrap these widgets, and if not, it can just use nested divs and CSS to create an alternative.


I've been building UI for years now and all these points highlighted by Dan are spot on. Great post!

I've been thinking lately about the "design system" trend that is becoming more and more popular where companies want more control over styling and behaviour to make their UX unique to their brand as well as consistent across all their products. Ready-made 3rd party components like the excellent react-select don't really fit into this world as general purpose components like this naturally have to make some choices regarding styling and behaviour. No matter how customisable they are, in the end they rarely integrate well into a UI based on a design system.

This makes me feel like the abstraction is all wrong. Rather than aiming for fully-functional, out-of-the-box components that cater for all manner of general purpose requirements, how about a library/framework that focuses on a set of primitive components that deal with the lower level concerns like layout, scrolling, positioning, etc. Maybe the abstraction could be more like "composable shapes" than "ready-made components" or something along those lines.

With this approach, you wouldn't ever start out with something like a ready-made Autocomplete component, for example. Instead, you would always build a custom Autocomplete and have complete control over styling and behaviour, but it would be built from solid foundations using some form of the "shape" abstraction. That way, you can focus on making the component's styling and behaviour consistent with the design system without having to worry so much about layout, accessibility, scrolling, positioning, etc - as all of these are taken care of by the framework.


That's exactly how some of the newer libraries in the React ecosystem work, btw. For example, Downshift is that "DIY Autocomplete" — it handles a11y and mechanics, but you can compose any kind of behavior and styling out of its primitives.

https://github.com/paypal/downshift
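
Roughly, the render-prop style looks like this. A minimal sketch (the fruit list, the filtering, and the inline style are made up for illustration; the prop getters and state come from Downshift):

  import React from 'react';
  import Downshift from 'downshift';

  const items = ['apple', 'banana', 'cherry'];

  function FruitAutocomplete() {
    return (
      <Downshift itemToString={(item) => item || ''}>
        {({ getInputProps, getMenuProps, getItemProps, isOpen, inputValue, highlightedIndex }) => (
          <div>
            {/* Downshift wires up the ARIA attributes and keyboard handling; */}
            {/* every bit of markup and styling below is ours. */}
            <input {...getInputProps({ placeholder: 'Pick a fruit' })} />
            <ul {...getMenuProps()}>
              {isOpen &&
                items
                  .filter((item) => item.includes(inputValue || ''))
                  .map((item, index) => (
                    <li
                      {...getItemProps({ key: item, item, index })}
                      style={{ background: highlightedIndex === index ? '#eee' : '#fff' }}
                    >
                      {item}
                    </li>
                  ))}
            </ul>
          </div>
        )}
      </Downshift>
    );
  }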

I'm glad to see this trend.


> a11y

I had to look that one up. I find it amusingly ironic that an accessibility initiative uses an abbreviation that renders the name incomprehensible.


It's certainly ironic, but most places I've been to pronounce it like the word "ally". At least it's easier than the even more obscure "i18n" and "l10n" for "internationalization" and "localization".


I've used Downshift before and while I like the idea, what I am suggesting is broader in scope. A set of primitives such as Overlay, Rectangle, Circle, List, Row, Column, etc that still allow you to apply styling but overall they sit at a lower level of abstraction than ready-made components, even ones based on something like Downshift.

The more I think about this the more I feel like I'm describing exactly how "Qt Quick"[1] works, the declarative user interface markup language that is part of the Qt Framework. Of course, it's not a web technology but it would be interesting to see a web-based framework/lib based around some (if not all) of the ideas found in QML.

[1] http://doc.qt.io/qt-5/qml-tutorial1.html


It sounds like you're describing Flutter. But that's not quite a web technology either.


This is why I always liked Cocoa (macOS's native UI toolkit) and still prefer it over web UI when writing native apps. It separates out the concepts of layout, appearance, and behavior, and lets you customize any one of them without needing to readjust the others. I just released an app this morning that uses a non-trivial amount of Cocoa, and it would be full Cocoa if there were a Cocoa version of the excellent Monaco editor. I feel like there should be a web version of this sort of "old-fashioned" toolkit that has a timeless ease of use once you get over the learning curve.


I always blamed it on the fact we stopped adding HTML tags too early.

There were a lot of UI elements that were obviously needed if you were going to use a browser as an interactive app platform, but were easily passed over when minimal forms were considered sufficient.

I could see, for example, a consistent WYSIWYG edit box, a dropdown menu construct, a select-with-manual-override, and consistent date and time widgets. Instead we got a bunch of inappropriate elements glued together with CSS and JavaScript to sort of work, but in unpredictable, non-native ways.


That's a great point, and something I've seen dealing with multiple look-and-feels at a single company. While not going quite as far as what you suggest, we've had good success building components with no custom CSS at all, instead using Bootstrap classes exclusively. This allows a lot of freedom for the design system to look as it needs, but encapsulates the "hard parts" you list. We've struggled with react-select as well, but were mostly able to specify various props to use Bootstrap classes. Not a sexy approach, but it works.


Headless React components handle logic and state management and then pass data and handlers to their child components, so whoever consumes them can determine how they look. Typically that pattern is called render props in React. Before render props, the go-to for this was higher-order components, which would wrap a component you write and style.


Totally agreed.

A modal, for instance, cannot be an independent isolated component. Because at the very least it needs to display an overlay over the entire page, which means that the modal trigger must have some way to communicate with an element at the root of the document. Those are architectural decisions, not UI ones.
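
(In React, portals are the usual answer to exactly this. A minimal sketch, assuming your index.html has a #modal-root node next to the app root:)

  import React from 'react';
  import ReactDOM from 'react-dom';

  // Renders its children into #modal-root at the document root,
  // no matter how deep in the component tree <Modal> is used.
  function Modal({ children }) {
    return ReactDOM.createPortal(
      <div className="overlay">{children}</div>,
      document.getElementById('modal-root')
    );
  }

The trigger still has to decide when to show it, though, which is the architectural part.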

And that's just one example.


Navigation

Whenever we can't fit everything on the screen at once, we use some of the patterns below:

- scroll view

- virtualized list

- tabs

- drawer

- master-detail

- page navigation

- modal navigation

- alert

- tooltip

- combo box

- collapsible

- carousel

- gallery

It wouldn't make sense to use any of these patterns if we had ∞ sized displays.

Would you call all of the above patterns "navigation"? Why not? Isn't navigation just a way to reach content that isn't immediately accessible? Why don't you think of scrolling through a list as some sort of navigation? I think you should.

It really helps to re-frame all of the above patterns as simple layout strategies. Layouting is about putting content where it belongs. Whether that content is visible or not (covered, collapsed, out of bounds) doesn't really matter.

Let's consider the classic master-detail example that so many people struggle with:

  Tablet (stacked on X axis)

  +-----+---------+
  |     |         |
  |  M  |    D    |
  |     |         |
  +-----+---------+

  --------X--------

  Phone (stacked on Z axis)

       +-----+       
  +-----+    |      /
  |     | M  |     /
  |  D  |    |    Z
  |     |----+   /
  +-----+       /
The only difference between these two examples is the stacking axis. That's it. It's the only thing that should change when resizing a window. You don't need to recreate a completely new layout using frames and pages and what not. Re-framing the problem just makes everything much easier. Navigation is just layouting.
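
To make that concrete, a sketch of what I mean. "Stack" here is a made-up primitive, not an existing API; the Z case uses the CSS grid trick of placing every child in the same cell so they overlap:

  import React from 'react';

  // Hypothetical Stack primitive: 'x' lays children side by side,
  // 'z' overlays them in the same grid cell.
  const Stack = ({ axis, children }) => (
    <div style={axis === 'x' ? { display: 'flex' } : { display: 'grid' }}>
      {axis === 'z'
        ? React.Children.map(children, (child) => (
            <div style={{ gridArea: '1 / 1' }}>{child}</div>
          ))
        : children}
    </div>
  );

  // The same master-detail tree on both form factors;
  // only the stacking axis changes with the viewport.
  function MasterDetail({ isTablet, master, detail }) {
    return (
      <Stack axis={isTablet ? 'x' : 'z'}>
        {master}
        {detail}
      </Stack>
    );
  }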


This is an interesting idea. And, as so often happens, things that may seem like counterexamples on the surface are actually open doors to completely different ways of doing things.

Like:

- Normal (stacked / Z) navigation is still special, since it always happens at full-screen scale.

- While you can stack your own UI the way you want, you still have to be able to really navigate to another page or app.

What if you could navigate at the widget scale? Like every web widget having a different URL? Instead of having a single monolithic view, we would have more independent views.

Or what if we rendered external links as regular content? Should <a> and <iframe> be just a single element, with different CSS render hints? If we can compose the final user experience from multiple applications, what kinds of "higher order applications" could we build, and what would this mean for application development?


What do you make of route transition animations?


There's no reason for route transition to be a special case of state transition.

Here are some examples of state transitions that can be augmented with animations:

- reorder items in a list

- add/remove an item from a list

- expand/collapse an element

- hover/press/disable a button

- show a popup

- show an inline error message

- open a drop down menu

- open/close a burger menu

- change the burger menu button to a back button

- change the scroll offset to a new item/anchor

- increase the height of a text field

- show/hide the top menu/navigation bar

- add an item to the cart (and the count badge appears/increases)

- increase the value of a progress bar

- image goes from loading/placeholder to loaded

As you can imagine, most of these state changes benefit from transition animations (scale, translate, opacity). We just add linear interpolation to a discrete change.

Can you think of a good reason to use different techniques and APIs to implement route transition animations and button state transitions? I can't.
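
To illustrate, a small sketch using the Web Animations API: the exact same helper works for a freshly-pushed page and a freshly-shown tooltip (the selectors are made up):

  // Interpolate opacity/transform on any discrete state change.
  function animateIn(element) {
    if (!element) return;
    element.animate(
      [
        { opacity: 0, transform: 'translateY(8px)' },
        { opacity: 1, transform: 'translateY(0)' }
      ],
      { duration: 200, easing: 'ease-out' }
    );
  }

  animateIn(document.querySelector('.page'));    // "route" transition
  animateIn(document.querySelector('.tooltip')); // widget state transition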

Once you think about using gestures for continuous transitions (swipe to go back on iOS), it makes even more sense to think of these components as physical overlapping sheets of material, with their own weight, inertia, grip/transition/friction, rails, anchors, springs.

Consider these interactions:

- swiping from the edge to reveal a side burger menu

- swiping from the edge to reveal the previous page

- swiping down to dismiss a bottom sheet

- scrolling down to reveal additional list items

- swiping horizontally to reveal actions under a list item

- dragging horizontally to move the thumb of a slider around

- pinching to zoom-in on a picture

These are all types of continuous navigation. They don't require animations because you're continuously animating them using touch. These interactions should be easy to implement. Programmatic discrete state changes should automatically infer transitions based on the physical characteristics of these materials.

What's a route? Should the currently selected tab be part of the route? Should the expanded/collapsed state of a widget be part of the route? Should the visibility of a popup be part of the route? Should the vertical scroll offset be part of the route? Should the zoom level of a map be part of the route? Should the open/close state of a burger menu be part of the route? I think the concept of a route doesn't make a lot of sense if it doesn't capture the entire state and history of a person's interaction with an app. I see no reason why the browser history/backstack should discriminate against different navigation patterns, and only store pages. Adding all state changes to the history makes it easy to use the back button to dismiss a popup, close a burger menu, close the keyboard, etc. Heck, all apps should have universal undo/redo functionality.

Another specific type of transition people struggle with is shared element transitions. For example, you tap on a thumbnail and it seamlessly animates into a detail page with a larger version of that image. This is easier to do as a layout transition than as a route transition.

A last thing to keep in mind is that layouts don't need to immediately create and render all of their elements. We can use virtualization to only materialize what is currently visible. For example, a list of 1000 items will only materialize the 10 or so items it can display at once, and will dynamically create/recycle items as the user scrolls. The exact same strategy can be used if we're stacking items on a Z axis. For example, we could create and render the top 2 items (so that the previous item is immediately visible in a swipe-back-to-reveal scenario), and only create and render other items as they get closer to the top of the stack.
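
The bookkeeping for the list case is tiny. A sketch, assuming fixed row heights:

  // Compute which rows to materialize from the scroll offset.
  function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan = 2) {
    const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
    const last = Math.min(
      totalRows - 1,
      Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
    );
    return { first, last }; // render only rows first..last, absolutely positioned
  }

The Z-axis version is the same idea with "top N of the stack" instead of a scroll window.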


> What's a route?

A route is the thing that I send to another person or save myself so that the specific piece of information I am looking at can be found again. There is a decision that has to be made, though. My selected text is never part of a route, but that might be the information I want to share. Conversely, when I open a hamburger menu to click the share button, the hamburger menu's open state is never the information I want to share.

Another consideration: does the route encompass the concept of the information being displayed, or the actual information? We typically solve this with a "permalink", where one route represents the concept (feed, newest, etc) and the permalink represents the specific information.


Interesting! Are there any resources or books that you might share that delve into this kind of reasoning towards UI engineering?


I would be interested as well.

For now, it's just a bunch of things I figured out along the way.


Many thanks to Dan for taking the time to write this up. UI dev is a deep and highly technical field, but we're often inundated with junior-ish programmers because it's often the entry point for folks coming out of school or boot camps. We need folks writing more about these kinds of principles and less about a new way to reinvent the wheel.


Absolutely. https://en.wikipedia.org/wiki/Invented_here

The name of the game is failure aversion. Frameworks and NPM packages for everything. When I interview JavaScript or UI developers, this is my first discriminator. Why write original code or reinvent the wheel when somebody else has your simple solution behind 50 MB of external code that you didn't write? If you are that fearful coward who believes in not writing original code, I don't want to work with you. Have a nice life and go work somewhere else. I would rather work with somebody willing to take a chance on problem solving.

If you are going to down vote please mention why. Don't be a troll. Hacker News tries very hard to not be an echo chamber.


You're being toxic by calling people "cowards" for simply using a package instead of writing it themselves, which reduces the amount of code they need to directly maintain. It displays a complete misunderstanding of the tradeoffs involved, as well as being rude.

And what's funny is that if you hadn't been a jerk, the original point of "frameworks and NPM packages for everything" is a good one and probably would have been well received. Because the fact is that sometimes it IS better to do it yourself instead of using the package, and most wouldn't. Sometimes.


[flagged]


Why is it people think being "honest and transparent" means being rude and mean? Tact is also a useful skill.


Because sensitive people are too selfish to tell the difference. The world isn’t always smiles, hearts, and rainbows there to please you. Directness isn’t condescension unless you are a child. Sometimes honesty really does mean telling people what they don’t want to hear.

Tact isn’t a form of antidepressant. Tact is the means to account for the intention of offense without regard for actual offense. Overly sensitive people may never see that distinction.

People who figure these things out early tend to live happier and more fulfilled lives.


> The world isn’t always smiles, hearts, and rainbows there to please you.

Of course it isn't, but that does not grant one an implicit licence to be uncouth.

> Directness isn’t condescension unless you are a child.

Directness isn't condescension, period. That, however, does not mean you can mix condescension with directness and call it plain directness.

> Sometimes honesty really does mean telling people what they don’t want to hear.

Then please, by all means, do just that, without resorting to improper name-calling.

----

You know what, I'm going to stop countering your points one-by-one, and try to talk to your central idea.

I agree with you, at a central level. I wouldn't hire people who cannot write their own code either. I, too, would rather work with people who can and want to solve problems than merely put pegs in holes with assistance from the likes of npm and SO.

But I wouldn't call them cowards. Because that would be mis-characterization, at best; and hyper-generalization, at worst.

Even if you meant to call someone risk-averse, or fearful, using the word 'coward' is more than being direct. 'Fearful' is direct and sufficient, as is 'risk-averse', but if you couldn't settle there and had to reach as far as 'coward', that suggests you wanted the extra force that comes with that word. Thus, condescending offence.

And then you continue in this path, characterizing those who wouldn't respond to you as lacking "balls", everyone who disagrees with your choice of words as "insecure", and everyone who downvoted you as "JavaScript developers" with "shattered hearts", those suggesting your "honest and rude" words lack tact as "overly sensitive" people.

Lastly, I'll leave you with the suggestion that people in this world lead happier and even more fulfilled lives without resorting to even the slight force you're employing here.


> Directness isn't condescension, period.

Then we wouldn't be having this conversation. This conversation is here because people are offended that I used the word coward, even though it wasn't aimed at anybody specifically. I call them cowards because the behavior stems from intimidation. Everybody has fear, but it's how people respond to it that determines bravery/cowardice. That said, it isn't a surprise that cowards would be angrily offended at the mere thought of such a characterization, even when it's not directed at them.

You have no idea how many horror stories I have heard from legal that boil down to "my boss is mean". After further investigation, more than 90% of the time the person making the complaint needed a mean boss because they were a piece of crap.

> And then you continue in this path, characterizing those who wouldn't respond to you as lacking "balls"

Yes, people who downvote for a minor disagreement of opinion, or because their mortal soul was shredded apart by their deeply profound state of offense, don't understand what the downvote is for. It isn't there to reinforce an echo chamber. The downvote is there to push down comments that are completely outside the conversation at hand or that demonstrate bad behavior. The big tears of sensitive people aren't an indication of bad behavior.

When I downvote a comment, I always reply saying why I am doing so, unless somebody else has already said it for me. It is the mature, courteous thing to do.


I responded to you, not because of any of the reasons you mention, but because I genuinely wanted to point out to you your mistakes. It appears I have done a poor job of it, and I'm unlikely to succeed with further tries.

So I won't try very hard, and you might pardon me for the brevity:

1. Flight can be a perfectly sensible alternative to fight, and isn't always cowardice.

2. Fear isn't the only possible reason to avoid an endeavour.

3. Not every offence is taken angrily.

4. Not every offence is taken for the same reason.

5. Taking offence isn't exclusive to any single group, let alone "cowards".

6. I am a "mean" boss who also has to deal with people you dislike dealing with, and even I agree I am being uncouth when I call someone names because of their ineptitude, or other technical reasons.

7. The downvotes you are receiving aren't necessarily only from people who fall into those two extreme characterizations you describe.

8. There are no big tears here, only downvotes and people trying to talk to you.

9. Explaining every downvote due to bad behaviour gets tiring, eventually; though, here I am, and I haven't even downvoted you, yet.


> Explaining every down vote

This is written as an excuse of laziness, which doesn't make sense. Clearly there is the energy to become emotionally responsive. That is the nature of an echo chamber: destroy that which is disagreeable, for comfort.

The rest of your points are all assumptions and stereotypes to qualify bad behavior. If you really merely disagreed with an opinion you would ignore it. There is something more at play if you feel the need to silence or destroy an opinion.

While I understand this all stems from immaturity and a vain need to somehow qualify it, I will leave you with this:

https://www.bartleby.com/130/2.html


1. Energy to downvote < Energy to explain.

2. Discouraging bad behaviour != Echo chamber

3. Rest of my points: specific refutations of your mistakes.

4. I didn't even disagree with your opinion, and I'm telling you this for the third time now. I didn't even downvote you, though I want to, especially now.

The very first site guideline about comments says "Be civil". You break that, complain about downvotes (which breaks another site guideline), and when someone tries to reason with you, you up the snark (which breaks the first guideline), and refuse to see/read anyone's point of view except your own.

Forget your misguided ideas that all you presented here is an opinion, and that every one who disagreed with your comment is fragile, thin-skinned, and thin-skilled, and examine just your behaviour here. Are you really, truly, surprised anyone wants to discourage such behaviour around here?


Your downvoting without an explanation is how you exercise the echo chamber. The logic of your thinking makes sense and is agreeable, but you aren't putting the pieces together correctly. You may not be able to see it due to an informal bias I have started to study.

Because the behavior I am seeing here is also frequently seen offline as well I am writing a paper on it for my coworkers. It is pretty interesting stuff to research.

https://en.wikipedia.org/wiki/Sensory_processing_sensitivity

Essentially, SPS appears to be a form of advanced processing in the brain. The brain of an SPS person will process certain stimuli much faster and more aggressively than that of a common person, resulting in a deep emotional experience. The research indicates 15-20% of adults may fall into this description. The positive result of this scenario is an intrinsically deep set of experiences from an exceedingly minor trigger.

The primary negative result is a loss of objectivity. A stimulus that results in a deep emotional experience is distracting to anybody. Such a distraction would likely affect all adults similarly. The difference here is whether the person is sufficiently triggered by a given stimulus. The given distraction warrants a response at the cost of a broader consideration for the given subject or a wider distribution of inputs.

The tragedy of this is that adults cannot properly self-regulate their behavior when compelled to a strong emotional state. This is problematic because emotional equilibrium is what allows the adult brain to self-reflect on its behavior and apply controls as necessary to adjust the behavior. The self-regulation generally occurs as the emotional state cools over a brief time period. If an SPS person is more deeply and frequently compelled to a deep emotional state, they likely cannot achieve the necessary modification controls present in the behavior of other adults.

The research also indicates an SPS person may pause on trivial things to allow for deeper processing of the resulting emotional state or triggering stimulus. In social settings this would appear awkward, as the timing and observed delays would seem strange, followed by a response, even if not spoken, that other people may not well understand.

Conversely, I occupy the opposite end of this spectrum of abnormal. I am hyper-objective, which comes with its own set of pros and cons. Hyper-objectivity is generally extremely rare, yet blessed with strengths of analysis and logic. People with this sort of personality are often, and undeservedly, considered to be smarter than average, when such assumptions are grossly inaccurate. These people will frequently analyze common things to a degree of specificity most people generally don't care about.

The con of a hyper-objective personality type is apathy. Since empathy is a deep form of listening and analysis, hyper-objective people are great at it, but this is not reflected in their behavior. Instead, all that most people see is that hyper-objective people don't care about emotions, which is mostly accurate. People like this are completely aware of this and how weird it is, which results in some abnormal decisions. It is easy to use empathy as a weapon to manipulate people or crush them with their own emotional states, and that is certainly an anti-social behavior. Hyper-objective people can modify their own behavior in response to social stimulus with far too great an ease, which could appear somewhat sociopathic.

The general lack of regard for emotions has the interesting side-effect of an anti-Dunning-Kruger effect. Instead of an incompetent person who feels superior to their peers, a hyper-objective person may in fact exhibit superior work performance but incorrectly believe themselves to be inferior despite evidence to the contrary. The resulting bias then compounds the problem by wondering why you can complete a hard task and your coworkers cannot. If you suck, then they must super-suck, which isn't correct at all.

If you are an SPS person, I recommend pointing that out to somebody you are close to offline so that they may provide you pointers when things get weird.


It's important to understand the root of the arguments for and against both "not invented here" and "invented here". Both can be considered a poor mindset because it's thinking in absolutes. Certainly, everyone can agree there is no need to "write original code" for every problem you encounter. If there are libraries behind which there are teams of strong engineers (React, Angular), who work specifically on that problem and exhaustively test and iterate on it, why would you take time and productivity away from the development of your product in order to create your own solutions?

On the other hand, if the existing libraries/solutions don't fit your use cases and you spend more time customizing them to work than you do developing your product, or the existing solutions are out of date, unsupported, or appear slapped together, it would certainly be worth working on your own solution to see if you can find a better way. Likewise, experimenting with custom reimplementation is a great way to understand the problems libraries solve in an in-depth way, as Dan suggests, and you may even end up with something better. You just need to be realistic about whether it makes sense to make it yourself or utilize what exists.


I suppose the common problem is when to apply abstraction. There are times when abstraction is necessary to prevent duplication of effort, and other times when it is a convenience layer. These qualities are not the same, and I find people generally cannot differentiate them, even outside of programming. The crux of that problem is confusing a person's own level of effort with the actual effort performed by an artifact.


From the part on accessibility:

> But we also need to make it easy for product developers to do the right thing. What can we do to make accessibility a default rather than an afterthought?

Yes! We accessibility advocates have been wishing for this for decades. In the context of web development, application developers should rarely, if ever, have to reach for the ARIA role attribute, because they should be able to re-use existing rich widgets rather than implementing custom ones. I'm hoping that Web Components-based toolkits like the new Ionic 4 will help here. Then project boilerplates should include some kind of accessibility testing by default, so developers will have to go out of their way to ignore it.
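
To make the contrast concrete, here's the kind of wiring a custom widget forces on you versus what the built-in element gives you for free (a rough sketch):

  import React from 'react';

  // A clickable <div> needs the role, focusability, and keyboard
  // activation all re-implemented by hand:
  function DivButton({ onActivate, children }) {
    return (
      <div
        role="button"
        tabIndex={0}
        onClick={onActivate}
        onKeyDown={(e) => {
          if (e.key === 'Enter' || e.key === ' ') onActivate(e);
        }}
      >
        {children}
      </div>
    );
  }

  // ...while a native <button> gets all of that by default:
  const GoodButton = ({ onActivate, children }) => (
    <button onClick={onActivate}>{children}</button>
  );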


For React, https://ui.reach.tech/ is a project that attempts to help with that.


React Native for Web also aims to raise the bar.


The "make accessibility a default rather than an afterthought" phrase was exactly what I've been looking for. I'm currently summarizing my evaluation of over 20 UI frameworks for an upcoming blog post. It's a pity to have even accessibility-aware frameworks like Bootstrap going with inaccessible colors by default - on purpose [0]. Yes, it is not hard to customize those colors, but many people simply go with the defaults, leading to an inaccessible site.

[0] https://getbootstrap.com/docs/4.1/getting-started/accessibil...


The way I have seen accessibility introduced successfully is to make QA the accessibility advocates. Train them up on the WCAG 2.0 guidelines and build up their confidence. It's OK when they open an accessibility defect that ends up being a false positive. Ensure QA has the tools in place to get into the browser's developer tools and look at the structure of things and how various UI controls are accessed.

As QA gains mastery over accessibility, the developers are on the hook to implement the proper controls. It is easy to get upset when developers look like a bottleneck and hold up releases, but nobody gets upset when QA identifies potential liability and a shitty product.


I have had this in mind for years but never followed through with it. Do you know of any groups currently working on the problem?

The dream is framework agnostic Web Components that "just work" in a modern browser. You can make it happen with a bit of polyfill javascript. My worry has been spending a bunch of effort making them and then having to maintain separate sets for all the different frontend "web component-ish" frameworks so that people would actually use them.


Framework-agnostic is hard because building a complex UI toolkit always needs some underlying way of binding model, control logic, and rendering together.

So if you don’t end up using a modern popular framework, you have already invented one that is undocumented.


The Entropy section is great. I’ve been experimenting with ways of leveling down entropy in React by using this little helper to express decision trees in your render function. Not totally happy with it yet though. https://github.com/scottyantipa/photonic


Your approach seems to create an abstraction on top of React where the next step could only be a DSL. This is a first indicator for me that the developer will lose those programming capabilities that React carefully tries to retain, and I'm not sure that's a good approach. I don't think this solves the problems of entropy that Dan mentioned in his post; please let me explain a bit further...

I think of JSX as a declarative way to redefine HTML and make it fit the desired design. Sometimes it's quite possible to find a common behavior for a component, and it can become part of a library, but that's not always the case. I like to use React components as a slim layer and prefer to do the work (side effects, data processing, etc.) somewhere else (purely functional modules, redux-saga, etc.). My go-to rule for creating components became the following: "if I start naming my component with something other than layout-related terms, then I'm using React as the wrong tool for the job". React is just the view layer, a data processor that puts the view-data into the DOM. And as such a view layer, its sole responsibility is to render conditionally, the very responsibility that your linked library tries to extract. I personally prefer to read through early returns in the render method of a component, as opposed to following the order of an array.

There are behavioral UI problems in an SPA that are not related to the DOM itself. React can't do much about that, and I think the problem of entropy stems from there. In the app at work, one of the biggest struggles was to keep a global lock for a modal. This modal can pop up to the user and show a notification, confirmation, form, etc. Modals should never overlap with each other, should give the user all the time she needs, but should also follow priorities (i.e. a push notification to cancel the session being one of the highest). The difficulty in managing this lock was that actions from everywhere can pop up that modal, be it a push notification over websockets, functions called from the native side, failed/succeeded API calls, or behavioral inputs that should work differently all over the app (i.e. barcode-scanning a product will search for a product in one view, will add it to the cart in another, or will prompt for a follow-up action if no product could be found).

Thanks to redux-saga, we can make use of actions not solely to update the store state, but additionally (if ignored by the reducers) use them like a message bus in a concurrent system. So with redux-saga (and inspired by Elixir) I could make use of the actor pattern and build a supervisor saga that keeps this behavior maintainable, but there is way too much complexity in this.
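
For the curious, the core of that supervisor looks roughly like this (a simplified sketch without the priority handling; the action type names are made up):

  import { actionChannel, take, put } from 'redux-saga/effects';

  // Modal requests from anywhere (websockets, native bridge, failed
  // API calls) queue up on a channel; the supervisor shows them
  // strictly one at a time.
  function* modalSupervisor() {
    const requests = yield actionChannel('MODAL_REQUESTED');
    while (true) {
      const { payload } = yield take(requests); // next request in line
      yield put({ type: 'MODAL_OPENED', payload });
      yield take('MODAL_CLOSED'); // hold the lock until dismissed
    }
  }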

Point being, React and Redux do a great job at managing entropy as long as they're used in the correct way. I think each component that doesn't take props can in fact be the starting point of its own (micro-)application. But I think the biggest difficulties are external influences on an SPA - those interconnected influences that make the web so attractive for an application.

EDIT: typo


Wow, this page made my top icon tray in Android turn pink. What is the CSS property responsible for this? (My guess would be: background-color)


This would be the "theme-color" meta tag[1] which I think only works in Chrome on Android, at the moment.

[1]: https://html.spec.whatwg.org/multipage/semantics.html#meta-t...


<meta name="theme-color" content="your-color-here">


On the principle that you can solve every problem in computer science by adding a layer of indirection, we solved all the problems listed in that Dan Abramov article by adding a few layers of indirection.

You didn't ever write a fetch; you subscribed to a data feed from the data feed repository.

To prevent redundant fetches, the data feed was served by a cache, which then filled cache entries by doing the fetch.

Doing a POST invalidated the whole cache.

Which triggered fetches of everything, but only once per API. The fetch result was pushed out to all the subscribers. The subscription was driven by the JSX, so only visible items had active subscriptions. Essentially, you wrapped your UI with the fetcher component, turning your JSX display tree into a data dependency tree.

Optimizing the cache invalidation was a problem we deferred for later, because the fetch overhead wasn't too bad, and we went from doing 40 fetches/page without the cache layer to 10, so the back end never noticed.
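
In case it helps, the whole layering fits in a page of code. A rough sketch (names are made up, error handling omitted):

  const cache = new Map();       // url -> Promise of data
  const subscribers = new Map(); // url -> Set of callbacks

  function subscribe(url, callback) {
    if (!subscribers.has(url)) subscribers.set(url, new Set());
    subscribers.get(url).add(callback);
    if (!cache.has(url)) {
      cache.set(url, fetch(url).then((r) => r.json())); // fill on demand
    }
    cache.get(url).then((data) => callback(data));
    return () => subscribers.get(url).delete(callback); // unsubscribe
  }

  // A POST invalidates everything; anything still subscribed refetches.
  function invalidateAll() {
    cache.clear();
    for (const [url, subs] of subscribers) {
      if (subs.size === 0) continue;
      const promise = fetch(url).then((r) => r.json());
      cache.set(url, promise);
      promise.then((data) => subs.forEach((cb) => cb(data)));
    }
  }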


I wonder how the author would approach the "infinite scroll" problem.


That’s the fun one because it’s at the intersection of several of these problems.


Would love to hear your thoughts as well on this! Especially with regards to remembering scroll position between route navigations in an infinite scroll component that fetches its content asynchronously.


I've thought about this some.. it seems to me the URL/hash route needs to tell us both the app state and the UI state, but most routers don't do this.


You can save arbitrary data when you push a new state with the HTML history API. This data is restored when you go back, and it does not affect the URL. Both React Router and Reach Router allow you to programmatically navigate and use this capability. I've used it on multiple occasions to restore both app state and UI state when navigating back. It's a little more work, of course. You can't just link to a simple URL. You need an event listener that creates the state object and tells the router to navigate manually. You may also need to ensure that any back buttons in your UI actually go back into the history instead of adding a new entry. It would be nice if it were easier, but once you get it working, the level of polish compared to most web apps makes it feel worth the effort.
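
With the plain History API it looks roughly like this (renderRoute is an assumed app-specific function; restoring after an async fetch means waiting for the content before calling scrollTo):

  // Before navigating away, stash UI state (here, the scroll offset)
  // into the *current* history entry:
  function navigate(url) {
    history.replaceState(
      { ...history.state, scrollY: window.scrollY },
      '',
      location.href
    );
    history.pushState({}, '', url);
    renderRoute(); // assumed: re-renders for the new location
  }

  // On back/forward, the saved state comes along for free:
  window.addEventListener('popstate', (event) => {
    renderRoute();
    const saved = event.state && event.state.scrollY;
    window.scrollTo(0, saved || 0);
  });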


Entropy. Good point. I don't think many have brought up that issue; it's either ignored ("we know what we're doing") or coped with when it's too late ("we knew what we were doing").

Entropy handling should be the goal for 2019 UI/Frontend engineering.


I think the reduction of the cognitive load of entropy was what propelled React (and specifically its innovation of the Virtual DOM) to be the most widely-used front-end framework. Being able to write a function transforming state to a description of the view, without having to worry about getting from whatever the view currently looks like to what you want it to look like, is a huge advantage.

It can certainly still be greatly improved, but I think we've already made great strides in entropy handling on the front-end in recent years.
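
The whole idea fits in a couple of lines; a minimal illustration:

  import React from 'react';

  // The view is a pure function of state. You describe the *result*,
  // and reconciliation figures out how to get there from the current DOM.
  function Cart({ items }) {
    return (
      <ul>
        {items.map((item) => (
          <li key={item.id}>{item.name} × {item.count}</li>
        ))}
      </ul>
    );
  }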


> innovation of the Virtual DOM

The virtual DOM is rather a desperate move: a way to support components' lifetime constructor/destructor events (componentWillMount/componentDidMount/componentWillUnmount).

What if you were able to define, in standard HTML/CSS, something like this (as is supported natively in Sciter)?

   // css
   div.mycomponent {
     behavior: MyComponent url(components.js);
   } 
   // script in components.js
   class MyComponent : Element {
     function attached() { /*constructor*/ }
     function detached() { /*destructor*/ }
     … other custom component specific methods … 
   }
 
With that simple mechanism you don't need a virtual DOM and its overhead at all. Component binding requires only the inclusion of that CSS.


So let's say that that's a component that displays a user's username - how do you make sure that its view is always up-to-date, whether the user is logged in, logged out, or another user's logged in?

The virtual DOM is not just about componentWillMount/componentDidMount/componentWillUnmount. It's in updating the view when a component's inputs (props) change that the virtual DOM shines.


If I am not mistaken, Houdini will allow similar features.

But it is still far away from standardization.


> still far away from standardization.

Sciter has used this feature for almost 10 years.

For all these 10 years we have had libraries of reusable components, and so no need for React.

The same goes for flexbox and grid: https://terrainformatica.com/2018/12/11/10-years-of-flexboxi...


Well, native has had it even longer (WPF behaviors and grid), but I have to put up with the web for most of my projects.


Sciter was used in production a year before WPF appeared: https://sciter.com/sciter/sciter-vs-wpf/


Looks pretty close to the Web Components API, specifically "customized built-in elements", which use the `is` attribute on a regular element to add custom behavior.
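
For comparison, the standard version of the same hooks (a minimal sketch; note that browser support for customized built-ins has historically varied):

  class FancyButton extends HTMLButtonElement {
    connectedCallback() { /* roughly the "attached" hook above */ }
    disconnectedCallback() { /* roughly "detached" */ }
  }
  customElements.define('fancy-button', FancyButton, { extends: 'button' });

  // Used in markup via the `is` attribute:
  // <button is="fancy-button">Click me</button>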


My experience with this type of intricate UI engineering is that agencies and UI/UX consultants often sell this romantic idea and fail miserably to execute on it. Either the agency so overextends itself on the theory that it leaves no room in the budget for proper execution, or the front-end devs don't have the experience to do it right.


One example of a hard problem in UI engineering: accessibility. In the GitHub markdown editor, we can format code by wrapping it with ```. But after that, what happens if we press Tab? Should the code indent, or should the focus move outside the editor? Currently it's confusing and annoying to move the focus outside the editor.


Is there any book that actually covers those problems from a technical viewpoint? Not from a UX/designer perspective.


The points are nice; it would be amazing if 90% of the web wasn't a dumpster fire of inconsistent, slow, and hard-to-use sites.


Re: Entropy. I was looking at a responsive thermostat component with an on/off control, with a 4-state on/off button, for a web miner. It had a ticker which displayed two metrics updating at time and framerate intervals, with time-based device status interstitials in multiple languages. XState is great for testing these situations.
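
For a flavor of how XState handles that kind of thing, a sketch of a 4-state on/off control (the state and event names are made up):

  import { Machine, interpret } from 'xstate';

  const minerMachine = Machine({
    id: 'miner',
    initial: 'off',
    states: {
      off:          { on: { TOGGLE: 'startingUp' } },
      startingUp:   { on: { STARTED: 'on', TOGGLE: 'shuttingDown' } },
      on:           { on: { TOGGLE: 'shuttingDown' } },
      shuttingDown: { on: { STOPPED: 'off' } }
    }
  });

  const service = interpret(minerMachine)
    .onTransition((state) => console.log(state.value))
    .start();

  service.send('TOGGLE'); // off -> startingUp

Every reachable state is enumerable, which is exactly what makes these situations testable.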


UI engineering is what ought to be taught in universities.



