
The Elements of UI Engineering - danabramov
https://overreacted.io/the-elements-of-ui-engineering/
======
miguelrochefort
I've been building consumer apps professionally since 2013 on iOS, Android,
Windows Phone and the web. I also spent a few years designing and implementing
UI frameworks. I've had to deal with all of the problems listed in Dan's
article (with and without frameworks).

Over the years, I have grown convinced that designing and implementing UI by
hand simply doesn't scale. There are too many things to consider, too many
different users, too many preferences, too many possible states. Responsive
design isn't just about screen sizes anymore, it's also about the user's
language, culture, disabilities, input, context, preferences, connectivity,
knowledge, focus, etc.

We can't expect every restaurant, bank, festival, and airline to implement
their own app and consider all of the above. You won't find a dark theme in
the Domino's app. Why do we tolerate these compromises in the name of
branding? Why should UIs be tightly coupled with the services and data? Why
don't we have general purpose clients?

I think the job of service providers should be to semantically annotate their
data, so that a general purpose client can dynamically render it. All of these
UI concerns would only have to be dealt with once, and we all would be able to
sleep at night. Just let business people do business, and let UI/UX people do
design.

~~~
aasasd
A related issue is that the means of making web UIs are too low-level. There's
no standard ready-made widget kit like on desktops.

HTML and CSS started as a solution for publishing text-based content, like the
olde magazines but on the web. For that they are splendid: text rendering and
basic input handling are taken care of, and a wide variety of output devices
have been supported since day one (HTML 2.0 without tables works excellently
on phones).

Then web 2.0 and web apps happened, web idioms began to change, all of
people's activity with computers started moving to the web. And it turned out
that to create complex UIs for all of that you have to fiddle with rather
low-level primitives and handle interactions between them, because outside of
text layout HTML primarily knows about divs and a handful of input widgets.

This may be good because high-level widgets will be developed independently
and will evolve faster (and maybe better) than if they were built into
browsers. But in the meantime we have to live with the chaos that we have.

~~~
simion314
>This may be good because high-level widgets will be developed independently
and will evolve faster (and maybe better) than if they were built into
browsers. But in the meantime we have to live with the chaos that we have.

I don't think that having some good built-in widgets (like a better dropdown,
menus, or a DataGrid) would prevent third parties from creating their own
custom versions, but it would help the 99% of users that need the basic stuff
(like a dropdown that can have icons).

------
pdrummond
I've been building UI for years now and all these points highlighted by Dan
are spot on. Great post!

I've been thinking lately about the "design system" trend that is becoming
more and more popular where companies want more control over styling and
behaviour to make their UX unique to their brand as well as consistent across
all their products. Ready-made 3rd party components like the excellent
react-select don't really fit into this world, as general purpose components
like this naturally have to make some choices regarding styling and
behaviour. No
matter how customisable they are, in the end they rarely integrate well into a
UI based on a design system.

This makes me feel like the abstraction is all wrong. Rather than aiming for
fully-functional, out-of-the-box components that cater for all manner of
general purpose requirements, how about a library/framework that focuses on a
set of primitive components that deal with the lower level concerns like
layout, scrolling, positioning, etc. Maybe the abstraction could be more like
"composable shapes" than "ready-made components" or something along those
lines.

With this approach, you wouldn't ever start out with something like a
ready-made Autocomplete component, for example. Instead, you would always build a
custom Autocomplete and have complete control over styling and behaviour, but
it would be built from solid foundations using some form of the "shape"
abstraction. That way, you can focus on making the component's styling and
behaviour consistent with the design system without having to worry so much
about layout, accessibility, scrolling, positioning, etc - as all of these are
taken care of by the framework.
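
The "primitive components" idea is essentially what is now called a headless
component: the library owns the mechanics, and the caller owns all rendering.
Here is a hypothetical sketch (every name is illustrative, not from any real
library) of a headless autocomplete core:

```javascript
// Hypothetical headless autocomplete: owns filtering and highlight
// mechanics, knows nothing about markup, styling, or the design system.
function createAutocomplete(items) {
  let query = "";
  let highlighted = 0;

  const matches = () =>
    items.filter((item) => item.toLowerCase().includes(query.toLowerCase()));

  return {
    setQuery(q) {
      query = q;
      highlighted = 0; // reset the highlight on every keystroke
    },
    moveHighlight(delta) {
      const n = matches().length;
      if (n > 0) highlighted = (highlighted + delta + n) % n; // wrap around
    },
    state: () => ({ matches: matches(), highlighted }),
  };
}
```

A design-system team would wire this state to their own markup, keeping
styling and behaviour fully on-brand while the mechanics stay shared.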

~~~
danabramov
That's exactly how some of the newer libraries in the React ecosystem work,
btw. For example, Downshift is that "DIY Autocomplete": it handles a11y and
mechanics, but you can compose any kind of behavior and styling out of its
primitives.

[https://github.com/paypal/downshift](https://github.com/paypal/downshift)

I'm glad to see this trend.

~~~
dandellion
> a11y

I had to look that one up. I find it amusingly ironic that an accessibility
initiative uses an abbreviation that renders the name incomprehensible.

~~~
strokirk
It's certainly ironic, but most places I've been to pronounce it like the
word "ally". At least it's easier than the even more obscure "i18n" and "l10n"
for "internationalization" and "localization".

------
thenanyu
Many thanks to Dan for taking the time to write this up. UI dev is a deep and
highly technical field, but we're often inundated with juniorish programmers
because it's often the entry point for folks coming out of school or boot
camps. We need folks writing more about these kinds of principles and less
about a new way to reinvent the wheel.

~~~
austincheney
Absolutely.
[https://en.wikipedia.org/wiki/Invented_here](https://en.wikipedia.org/wiki/Invented_here)

The name of the game is failure aversion. Frameworks and NPM packages for
everything. When I interview JavaScript or UI developers this is my first
discriminator. Why write original code or reinvent the wheel when somebody
else has your simple solution behind 50mb of external code that you didn't
write? If you are that fearful coward who believes in not writing original
code I don't want to work with you. Have a nice life and go work somewhere
else. I would rather work with somebody willing to take a chance on problem
solving.

If you are going to downvote, please mention why. Don't be a troll. Hacker
News tries very hard not to be an echo chamber.

~~~
0xCMP
You're being toxic by calling people "cowards" for simply using a package
instead of writing it themselves, which reduces the amount of code they need
to directly maintain. This displays a complete misunderstanding of the
tradeoffs involved, as well as being rude.

And what's funny is that if you hadn't been a jerk, the original point of
frameworks and NPM packages for _everything_ is a good one and probably would
have been well received. Because the fact is that sometimes it IS better to do
it yourself instead of using the package, and most wouldn't. Sometimes.

------
miguelrochefort
Navigation

Whenever we can't fit everything on the screen at once, we use some of the
patterns below:

\- scroll view

\- virtualized list

\- tabs

\- drawer

\- master-detail

\- page navigation

\- modal navigation

\- alert

\- tooltip

\- combo box

\- collapsible

\- carousel

\- gallery

It wouldn't make sense to use any of these patterns if we had ∞ sized
displays.

Would you call all of the above patterns "navigation"? Why not? Isn't
navigation just a way to reach content that isn't immediately accessible? Why
don't you think of scrolling through a list as some sort of navigation? I
think you should.

It really helps to re-frame all of the above patterns as simple layout
strategies. Layouting is about putting content where it belongs. Whether that
content is visible or not (covered, collapsed, out of bounds) doesn't really
matter.

Let's consider the classic master-detail example that so many people struggle
with:

    
    
      Tablet (stacked on X axis)
    
      +-----+---------+
      |     |         |
      |  M  |    D    |
      |     |         |
      +-----+---------+
    
      --------X--------
    
      Phone (stacked on Z axis)
    
           +-----+       
      +-----+    |      /
      |     | M  |     /
      |  D  |    |    Z
      |     |----+   /
      +-----+       /
    

The only difference between these two examples is the stacking axis. That's
it. It's the only thing that should change when resizing a window. You don't
need to recreate a completely new layout using frames and pages and what not.
Re-framing the problem just makes everything much easier. Navigation is just
layouting.
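
The "navigation is just layouting" framing can be sketched in a few lines.
This is a hypothetical sketch (the names and the 600px breakpoint are
illustrative, not from any real framework): master-detail is one layout, and
only the stacking axis changes with the viewport.

```javascript
// Hypothetical sketch: master-detail as a single layout whose stacking
// axis depends on the viewport width. The breakpoint is arbitrary.
function chooseStackingAxis(viewportWidth, breakpoint = 600) {
  // Wide viewport: master and detail side by side (X axis).
  // Narrow viewport: detail stacked on top of master (Z axis).
  return viewportWidth >= breakpoint ? "x" : "z";
}

function layoutMasterDetail(viewportWidth) {
  const axis = chooseStackingAxis(viewportWidth);
  return axis === "x"
    ? { axis, master: { visible: true }, detail: { visible: true } }
    : { axis, master: { visible: false }, detail: { visible: true } };
}
```

Resizing the window only flips the axis; on a phone the master pane isn't
destroyed, it's merely covered.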

~~~
danabramov
What do you make of route transition animations?

~~~
miguelrochefort
There's no reason for route transitions to be treated as a special case of
state transitions.

Here are some examples of state transitions that can be augmented with
animations:

\- reorder items in a list

\- add/remove an item from a list

\- expand/collapse an element

\- hover/press/disable a button

\- show a popup

\- show an inline error message

\- open a drop down menu

\- open/close a burger menu

\- change the burger menu button to a back button

\- change the scroll offset to a new item/anchor

\- increase the height of a text field

\- show/hide the top menu/navigation bar

\- add an item to the cart (and the count badge appears/increases)

\- increase the value of a progress bar

\- image goes from loading/placeholder to loaded

As you can imagine, most of these state changes benefit from transition
animations (scale, translate, opacity). We just add linear interpolation to a
discrete change.
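
The "linear interpolation added to a discrete change" idea, as a minimal
hypothetical sketch (names are illustrative): a discrete value change is
expanded into the intermediate frames a renderer would draw.

```javascript
// lerp: standard linear interpolation between two values.
function lerp(from, to, t) {
  return from + (to - from) * t;
}

// Expand a discrete state change (e.g. progress 0 -> 1) into the
// intermediate values to draw, one per frame.
function transitionFrames(from, to, frameCount) {
  const frames = [];
  for (let i = 0; i <= frameCount; i++) {
    frames.push(lerp(from, to, i / frameCount));
  }
  return frames;
}
```

The same interpolation applies whether the value is a progress bar's fill, an
element's opacity, or a pane's translation along the stacking axis.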

Can you think of a good reason to use different techniques and APIs to
implement route transition animations and button state transitions? I can't.

Once you think about using gestures for continuous transitions (swipe to go
back on iOS), it makes even more sense to think of these components as
physical overlapping sheets of material, with their own weight, inertia,
grip/transition/friction, rails, anchors, springs.

Consider these interactions:

\- swiping from the edge to reveal a side burger menu

\- swiping from the edge to reveal the previous page

\- swiping down to dismiss a bottom sheet

\- scrolling down to reveal additional list items

\- swiping horizontally to reveal actions under a list item

\- dragging horizontally to move the thumb of a slider around

\- pinching to zoom-in on a picture

These are all types of continuous navigations. They don't require animations
because you're continuously animating them using touch. These interactions
should be easy to implement. Programmatic discrete state changes should
automatically infer transitions based on the physical characteristics of
these materials.

What's a route? Should the currently selected tab be part of the route? Should
the expanded/collapsed state of a widget be part of the route? Should the
visibility of a popup be part of the route? Should the vertical scroll offset
be part of the route? Should the zoom level of a map be part of the route?
Should the open/close state of a burger menu be part of the route? I think the
concept of a route doesn't make a lot of sense if it doesn't capture the
entire state and history of a person's interaction with an app. I see no
reason why the browser history/backstack should discriminate against different
navigation patterns, and only store pages. Adding all state changes to the
history makes it easy to use the back button to dismiss a popup, close a
burger menu, close the keyboard, etc. Heck, all apps should have universal
undo/redo functionality.

Another specific type of transition people are struggling with is shared
element transitions. For example, you tap on a thumbnail and it seamlessly
animates into a detail page with a larger version of that image. This is
easier to do as a layout transition than as a route transition.

A last thing to keep in mind is that layouts don't need to immediately create
and render all of their elements. We can use virtualization to only
materialize what is currently visible. For example, a list of 1000 items will
only materialize the 10 or so items it can display at once, and will
dynamically create/recycle items as the user scrolls. The exact same strategy
can be used if we're stacking items on a Z axis. For example, we could create
and render the top 2 items (so that the previous item is immediately visible
in a swipe-back-to-reveal scenario), and only create and render other items as
they get closer to the top of the stack.
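
The virtualization strategy boils down to computing which slice of items
intersects the viewport. A hypothetical sketch, assuming a uniform item
height for simplicity (real lists need measurement):

```javascript
// Compute which items of a virtualized list should be materialized.
// Assumes every item has the same height.
function visibleRange(scrollTop, viewportHeight, itemHeight, itemCount) {
  const first = Math.floor(scrollTop / itemHeight);
  const last = Math.min(
    itemCount - 1,
    Math.ceil((scrollTop + viewportHeight) / itemHeight) - 1
  );
  return { first, last };
}
```

A list of 1000 items at 50px each in a 500px viewport materializes only items
0 through 9; scrolling just shifts the window and recycles the rest.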

~~~
ggpsv
Interesting! Are there any resources or books that you might share that delve
into this kind of reasoning towards UI engineering?

~~~
miguelrochefort
I would be interested as well.

For now, it's just a bunch of things I figured out along the way.

------
mwcampbell
From the part on accessibility:

> But we also need to make it easy for product developers to do the right
> thing. What can we do to make accessibility a default rather than an
> afterthought?

Yes! We accessibility advocates have been wishing for this for decades. In the
context of web development, application developers should rarely, if ever,
have to reach for the ARIA role attribute, because they should be able to
re-use existing rich widgets rather than implementing custom ones. I'm hoping
that Web Components-based toolkits like the new Ionic 4 will help here. Then
project boilerplates should include some kind of accessibility testing by
default, so developers will have to go out of their way to ignore it.

~~~
danabramov
For React, [https://ui.reach.tech/](https://ui.reach.tech/) is a project
that attempts to help with that.

~~~
ggurgone
React Native for Web also aims to raise the bar.

------
escot
The Entropy section is great. I’ve been experimenting with ways of leveling
down entropy in React by using this little helper to express decision trees in
your render function. Not totally happy with it yet though.
[https://github.com/scottyantipa/photonic](https://github.com/scottyantipa/photonic)

~~~
aastronaut
Your approach seems to create an abstraction on top of react where the next
step could only be a DSL. This is a first indicator for me that the developer
will lose those programming capabilities that react carefully tries to retain,
and I'm not sure if this is a good approach. I think this doesn't solve the
problems of entropy that Dan mentioned in his post and please let me explain a
bit further...

I think of JSX as a declarative way to redefine HTML and to make it fit to the
desired design. Sometimes it's quite possible to find a common behavior of a
component and it can become part of a library, but that's not always the case.
I like to use react components as a slim layer and prefer to do the work
(side-effects, data processing etc.) somewhere else (purely functional
modules, redux-saga etc.). My go-to rule for creating components became the
following: "if I start naming my component with something other than
layout-related terms, then I'm using react as the wrong tool for the job".
React is just the view layer, a data processor that puts the view-data into
the DOM. As such a view layer, its sole responsibility is to render
conditionally, the responsibility that your linked library tries to extract. I
personally prefer to read through early returns in the render method of a
component as opposed to following the order of an array.

There are behavioral UI problems in an SPA that are not related to the DOM
itself. React can't do much about that and I think the problem of entropy
stems from there: In the app at work one of the biggest struggles was to keep
a global lock for a modal. This modal can pop-up to the user and show a
notification, confirmation, form etc. Modals should never overlap with each
other, give the user all the time she needs, but should also follow priorities
(i.e. a push-notification to cancel the session being one of the highest). The
difficulty in managing this lock was that actions from everywhere can pop-up
that modal, be it a push notification over websockets, functions called from
the native side, failed/succeeded API calls or behavioral inputs that should
work differently all over the app (i.e. barcode scanning a product will search
for a product in one view, will add it to the cart in another, or will prompt
for a follow-up action if no product could be found). Thanks to redux-saga, we
can make use of actions not solely to update the store state, but can
additionally (if ignored by the reducers) use them like a message bus in a
concurrent system. So with redux-saga (and inspired by elixir) I could make
use of the actor-pattern and build a supervisor saga that keeps this behavior
maintainable, but there is way too much complexity in this.
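
The lock described above reduces, at its core, to a single gatekeeper with a
priority queue, independent of redux-saga. A hypothetical sketch (priorities
and names are made up): requests from anywhere go through one object, the
current modal is never preempted, and the highest-priority waiter shows next.

```javascript
// Hypothetical global modal lock: one modal at a time, priority-ordered
// queue for the rest. A real version would sit behind a saga/message bus.
class ModalLock {
  constructor() {
    this.current = null; // the modal on screen, if any
    this.queue = [];     // pending modals, highest priority first
  }

  request(modal) { // modal: { name, priority }
    if (!this.current) {
      this.current = modal;   // nothing showing: take the lock
    } else {
      this.queue.push(modal); // never overlap the current modal
      this.queue.sort((a, b) => b.priority - a.priority);
    }
  }

  dismiss() { // the user closed the current modal in their own time
    this.current = this.queue.shift() || null;
  }
}
```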

Point being, react and redux do a great job at managing entropy as long as
they're used in the correct way. I think each component that doesn't take
props can in fact be the starting point of its own (micro-)application. But I
think the biggest difficulties are external influences for an SPA - those
interconnected influences that make the web so attractive for an application.

EDIT: typo

------
obastard
On the principle that you can solve every problem in computer science by
adding a layer of indirection, we solved all the problems listed in that Dan
Abramov article by adding a few layers of indirection.

You never wrote a fetch; you subscribed to a data feed from the data feed
repository.

To prevent redundant fetches, the data feed was served by a cache, which then
filled cache entries by doing the fetch.

Doing a POST invalidated the whole cache.

Which triggered fetches of everything, but only once per API. The fetch
results were pushed out to all the subscribers. The subscription was driven
by the JSX, so only visible items had active subscriptions. Essentially, you
wrapped your UI with the fetcher component, turning your JSX display tree
into a data dependency tree.

Optimizing the cache invalidation was a problem we ignored for later, because
the fetch overhead wasn’t too bad, and we went from doing 40 fetches/page
without the cache layer to 10, so the back end never noticed.
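
The layering described above can be sketched in miniature. This is a
hypothetical reconstruction, synchronous for clarity where the real thing
would be async; `fetchFn` stands in for the actual network call.

```javascript
// Hypothetical sketch of the indirection layers: components subscribe to
// feeds, a cache de-duplicates fetches, and a POST invalidates everything.
class FeedRepository {
  constructor(fetchFn) {
    this.fetchFn = fetchFn;       // stand-in for the real network call
    this.cache = new Map();       // url -> cached response
    this.subscribers = new Map(); // url -> Set of callbacks
  }

  // Components never fetch directly; they subscribe to a data feed.
  subscribe(url, callback) {
    if (!this.subscribers.has(url)) this.subscribers.set(url, new Set());
    this.subscribers.get(url).add(callback);
    if (!this.cache.has(url)) {
      this.cache.set(url, this.fetchFn(url)); // at most one fetch per URL
    }
    callback(this.cache.get(url));
  }

  // A POST invalidates the whole cache, then refetches each active feed
  // once and pushes the fresh result to all of its subscribers.
  post(url, body) {
    this.fetchFn(url, { method: "POST", body });
    this.cache.clear();
    for (const [feedUrl, callbacks] of this.subscribers) {
      const fresh = this.fetchFn(feedUrl);
      this.cache.set(feedUrl, fresh);
      for (const cb of callbacks) cb(fresh);
    }
  }
}
```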

------
amelius
Wow, this page made my top icon tray in Android turn pink. What is the CSS
property responsible for this? (My guess would be: background-color)

~~~
coogie
This would be the "theme-color" meta tag[1] which I think only works in Chrome
on Android, at the moment.

[1]: [https://html.spec.whatwg.org/multipage/semantics.html#meta-theme-color](https://html.spec.whatwg.org/multipage/semantics.html#meta-theme-color)

------
amelius
I wonder how the author would approach the "infinite scroll" problem.

~~~
danabramov
That’s the fun one because it’s at the intersection of several of these
problems.

~~~
2Pacalypse-
Would love to hear your thoughts as well on this! Especially with regards to
remembering scroll position between route navigations in an infinite scroll
component that fetches its content asynchronously.

~~~
djKianoosh
i've thought about this some.. seems to me the url/hash route needs to tell
us both the app state and UI state, but most routers don't do this

~~~
joshtynjala
You can save arbitrary data when you push a new state with the HTML history
API. This data is restored when you go back, and it does not affect the URL.
Both React Router and Reach Router allow you to programmatically navigate and
use this capability. I've used it on multiple occasions to restore both app
state and UI state when navigating back. It's a little more work, of course.
You can't just link to a simple URL. You need an event listener that creates
the state object and tells the router to navigate manually. You may also need
to ensure that any back buttons in your UI actually go back into the history
instead of adding a new entry. It would be nice if it were easier, but once
you get it working, the level of polish compared to most web apps makes it
feel worth the effort.
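
The pattern is easiest to see with the history stack modeled in memory. A
hypothetical sketch (an in-memory stand-in for the browser's
`history.pushState`/`popstate`, not the real API): arbitrary UI state such as
a scroll offset rides along with each entry and comes back on "back".

```javascript
// In-memory stand-in for the HTML history API, to illustrate storing
// UI state (scroll offset, loaded item count) alongside each entry.
class HistoryStack {
  constructor() {
    this.entries = [];
    this.index = -1;
  }

  pushState(state, url) {
    // Pushing discards any forward entries, like the real history API.
    this.entries = this.entries.slice(0, this.index + 1);
    this.entries.push({ state, url });
    this.index += 1;
  }

  back() {
    if (this.index > 0) this.index -= 1;
    return this.entries[this.index]; // popstate would deliver entry.state
  }
}
```

On `back()`, the restored `state` can be used to re-request the right number
of items and re-apply the scroll position before rendering.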

------
nenadg
Entropy. Good point. I don't think many have brought up that issue; it's
either ignored (we know what we're doing) or coped with when it's too late
(we knew what we were doing).

Entropy handling should be the goal for 2019 UI/Frontend engineering.

~~~
Vinnl
I think the reduction of the cognitive load of entropy was what propelled
React (and specifically its innovation of the Virtual DOM) to be the most
widely-used front-end framework. Being able to write a function transforming
state to a description of the view, without having to worry about getting from
whatever the view currently looks like to what you want it to look like, is a
huge advantage.

It can certainly still be greatly improved, but I think we've already made
great strides in entropy handling on the front-end in recent years.

~~~
c-smile
> innovation of the Virtual DOM

Virtual DOM is rather a desperate move to support component lifetime
constructor/destructor events (componentWillMount/componentDidMount and
componentWillUnmount).

What if you were able to define in standard HTML/CSS something like this (as
is supported natively in Sciter):

    
    
    // css
    div.mycomponent {
      behavior: MyComponent url(components.js);
    }

    // script in components.js
    class MyComponent : Element {
      function attached() { /* constructor */ }
      function detached() { /* destructor */ }
      // ... other custom component-specific methods ...
    }

With that simple mechanism you don't need a virtual DOM and its overhead at
all. Component binding requires only the inclusion of that CSS.

~~~
pjmlp
If I am not mistaken, Houdini will allow similar features.

But it is still far away from standardization.

~~~
c-smile
> still far away from standardization.

Sciter has used this feature for almost 10 years.

All these 10 years we have had libraries of reusable components, and so no
need for React.

The same goes for flexbox and grid:
[https://terrainformatica.com/2018/12/11/10-years-of-flexboxing/](https://terrainformatica.com/2018/12/11/10-years-of-flexboxing/)

~~~
pjmlp
Well, native has had it even longer (WPF behaviors and grid), but I have to
put up with the Web for most of my projects.

~~~
c-smile
Sciter was used in production a year before WPF appeared:
[https://sciter.com/sciter/sciter-vs-wpf/](https://sciter.com/sciter/sciter-vs-wpf/)

------
cityzen
My experience with this type of intricate UI engineering is that agencies and
UI/UX consultants often sell this romantic idea and fail miserably to execute
on it. Either the agency overextends itself on the theory and leaves no room
in the budget for proper execution, or the front-end devs don't have the
experience to do it right.

------
revskill
One example of a hard problem in UI engineering: accessibility. In GitHub's
markdown editor, we can format code by wrapping it with ```. But after that,
what happens if we press Tab? Should the code indent, or should the focus
move outside the editor? Currently, moving the focus outside the editor is
confusing and annoying.

------
nyadesu
Is there any book that actually covers those problems from a technical
viewpoint, not from a UX/designer perspective?

------
ecp9
The points are nice, but it would be amazing if 90% of the web weren't a
dumpster fire of inconsistent, slow, and hard-to-use sites.

------
diaktifkan
Re: entropy. I was looking at a responsive thermostat component with a
4-state on/off control for a web miner. It had a ticker which displayed two
metrics updating at time and framerate intervals, with time-based device
status interstitials in multiple languages. XState is great for testing these
situations.

------
73Solomon
UI engineering is what should be taught in universities.

