
Short term usability is not the same as long term usability - ingve
https://nibblestew.blogspot.com/2020/06/short-term-usability-is-not-same-as.html
======
hinkley
I remember that when I was starting out, people talked about this problem a
lot. But it's been years since I've heard it said unless I was the one saying
it.

I still routinely wonder if the right solution is to build up an 'action bar'
the way video games often do. Microsoft got into this neighborhood with the
Ribbon but I feel like they missed the target.

You graduate to more sophisticated options, and the ones you use all the time
get mapped to whatever keys _you_ want them to be. Add a hidden flag for
setting up a new machine or user that jumps straight to expert mode and you're
done. This costs the expert one more setting to memorize, but the payoff for
doing so is substantial.

Shortcuts seem to work great on QWERTY but less awesome for everyone else.
Just let me set my own so I don't have to use an extra finger on Dvorak or a
Japanese keyboard.

~~~
mjevans
While that sounds like a great idea...

* How do these shortcuts remain consistent between applications?

* What's a system wide, task-type-wide, and program specific action?

* How do these persist across different versions of OS, application, devices, different ownership (work / home / etc), or otherwise follow the user?

* "X is broken" use the shortcut... wait what did you configure that as?

~~~
hinkley
That may be the other thing. Apps have gotten a lot bigger on average. How
many shortcuts are really consistent across applications? A dozen? That might
have been half of the shortcuts 20 years ago. Now that’s a quarter and nobody
bothered creating bindings for half of the menu entries.

Hell, Outlook gets it wrong and MS used to harp on this loudly. I can’t count
how many new folders I’ve created while trying to start an email.

------
allenu
I agree with the notion.

I find that a lot of less experienced devs I work with like to prioritize
"ease of use" in API design over other things, such as testability,
orthogonality, decoupling, reversibility, etc. If an API is "easy to use"
from a client perspective, they often deem it a good one. API ease of use is
definitely important, but it has to be weighed against other constraints,
which are fuzzier and more about long-term maintainability. Sometimes making
an API slightly harder to use (often by requiring additional client knowledge
about the domain) is worth the trade-off, since it means the API is easier to
extend in the future.
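
As a hypothetical illustration (the names and the mail-sending domain are mine,
not from the comment), the difference might look like this in Python:

    # "Easy to use": the function hides where reports go, so callers need zero
    # domain knowledge -- but you can't test it without a real SMTP server, and
    # adding a second delivery channel means changing this function.
    def send_report_easy(recipient: str, body: str) -> None:
        import smtplib
        with smtplib.SMTP("localhost") as server:
            server.sendmail("reports@example.com", recipient, body)

    # Slightly harder to use: the caller must know about (and supply) a mailer,
    # but tests can pass a fake, and new transports need no changes here.
    class Mailer:
        def send(self, sender: str, recipient: str, body: str) -> None:
            raise NotImplementedError

    def send_report(recipient: str, body: str, mailer: Mailer) -> None:
        mailer.send("reports@example.com", recipient, body)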

It's definitely a skill to learn what helps long-term usability vs. short-term
usability.

I often go back to Rich Hickey's talk Simple Made Easy when thinking about
this problem: [https://www.infoq.com/presentations/Simple-Made-Easy/](https://www.infoq.com/presentations/Simple-Made-Easy/)

~~~
umvi
IMO "public" facing APIs should _always_ be easy to use and only require the
minimum information from the user necessary. An example of an outstanding
public API would be nlohmann's json library for C++[0].

Whether that API is merely a wrapper for an internal API that is more testable
(i.e. allows injection, etc.) or whatever is another matter.

[0] [https://github.com/nlohmann/json](https://github.com/nlohmann/json)

~~~
allenu
I think there can be debate about what counts as "minimum information". I'd
also say "easy" for one developer may be challenging for another if the domain
of the model is foreign to them.

A lot of frameworks require up-front knowledge to work with. To some, that's
not "easy", but it allows the client to do so much because what the framework
is providing is not simple.

In other places, the API can be dead easy because what it's providing is so
simple.

------
boomlinde
I agree with the sentiment in the headline, but want to offer a
counterexample.

Consider Emacs vs. Notepad++, for the purpose of editing code. Emacs in this
example represents "maximal flexibility" for having a configuration interface
where almost every aspect of its function can be (re)programmed, and Notepad++
represents "maximal cooperation" for having a configuration interface and
limited toolset tailored to the task specifically at hand (editing code). I'm
not going to contribute code to either project; submitting patches for a
"maximally cooperative" system to adapt it to your use case is just an
advanced and inconvenient form of "maximal flexibility".

In my experience this has the opposite relationship to that described in the
article. Getting started with Emacs is a significant investment (as per the
article, "everything is possible but nothing is easy"), while Notepad++ is
pretty much pick-up-and-go out of the box, but over time the extensibility of
the former pays off in a better functionality/amount of work ratio.

There is an example of "maximal flexibility" in the article, but none for
"maximal cooperation", and I'd like to see one.

~~~
robbyt
I recently worked with a developer who proudly told me he had over 25 years of
experience. Unfortunately, he wasn't a very good developer.

In talking to him, he told me he switched jobs on average every 3-5 years. And
I realized this was the problem. He didn't have 25 years of experience, he had
5 years of experience, 5 times over, because he never went deep enough to
master his craft. He stuck with Notepad++, never bothered to understand topics
like hard tabs vs. soft tabs, never added automated linting or reformatting to
his coding workflow, and didn't bother writing tests (or testable code).

I'm sure many of us have worked with someone whose terminal is misconfigured,
who has a messy PATH, or whose computer is in a general state of disarray.
This is an example of short-term usability. Long-term usability assumes
general mastery of a tool/system... but that isn't always the case.

------
cjfd
The general point the author is making sounds true, but I am sceptical of any
criticism of make. Building and/or handling dependencies was basically a
solved problem as soon as make was invented, and all of the new stuff in this
area just seems plainly unneeded to me. Also, when people invent new build
systems, one can end up with projects where one part is built using one build
system and another part using another. Since things can depend on each other
in arbitrarily complex ways because of code generation, this will lead to
building either too much or too little. In particular, having a build system
for a particular language is a strange concept.

~~~
BiteCode_dev
The problem with make is that it's yet another DSL to learn.

Every single tool in your toolbox could introduce a new one, and it leads to
fatigue.

Task runner? New DSL (make)

Test runner? New DSL (robot)

Deployment? New DSL (Ansible)

Batch spawning? New DSL (tox)

Etc

Of course, dev in half of those DSLs is a total pain because, as with most
DSLs, the tooling sucks: terrible completion, linting, debugging, and
composing experience.

So people write/leverage tools they can use with their favorite language. And
why not? You have to install it anyway (make isn't installed by default on
Windows; it's not even installed on vanilla Ubuntu!).

When I need a make-like tool, I use doit
([https://pydoit.org/](https://pydoit.org/)).

Why?

It's in Python, the language of my projects. So I can use the same tooling,
the same libs, the same install and testing procedures. And so can people
contributing: no need to train them. Most devs don't know how to use make
(most of them are on Windows after all).

Using make adds zero benefit for me: doit just does what make does (ok, that
sentence is hard to parse :)). But make adds an extra step of asking people to
install it, while I can just slip doit in among my other Python project
dependencies. I have to google the syntax. I can't use tab to complete names,
or right-click to get documentation. And if (actually when) I need to debug my
Makefile, God have mercy.
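
For reference, a minimal dodo.py sketch (the task name and the gcc command are
just placeholders, not from this comment):

    # dodo.py -- run with `doit` (or `doit build`)
    def task_build():
        """Recompile main.c whenever it changes."""
        return {
            "actions": ["gcc -Wall -c main.c -o main.o"],
            "file_dep": ["main.c"],
            "targets": ["main.o"],
        }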

This isn't against make. It just doesn't provide enough value to justify its
cost.

~~~
yjftsjthsd-h
> You have to install it anyway (make isn't installed by default on Windows;
> it's not even installed on vanilla Ubuntu!).

What are the odds that you're compiling software on Ubuntu and haven't
installed build-essential? I mean, it could happen, but it feels contrived.

~~~
BiteCode_dev
You assume make is only for compiling, but it's not.

It's just a task runner.

------
wruza
    
    
        project('tutorial', 'c')
        gtkdep = dependency('gtk+-3.0')
        executable('demo', 'main.c', dependencies : gtkdep)
    

This is a snippet from the Meson tutorial, and it's why I'm still using make
(and not cmake) in all my personal C projects. I have no idea what flags will
be passed to gcc or how to change them (-mms-bitfields was required on Windows
for GTK). Second, I may have no pkg-config environment when I link a Windows
executable against MSYS2-installed libs in plain cmd. These shorthands may be
a long-term win for a regular project in a strict unixlike or msvc-env.bat,
but it is not a build system. It is a fixed recipe book (as seen from the
tutorial page; don't take it as criticism). It substitutes a cryptic set of
directives for simple knowledge of -I, -L and -l. You spoke gcc very well; now
you have to speak Meson's local dialect and be able to catch and fix subtle
errors in translation.

The problem is, one has to dig into a seemingly easy build system to tune it
to their needs. That is much harder than just fixing CFLAGS += or LDFLAGS +=
in a Makefile.

For me, a better build system would look like a set of rules, not in a
Makefile but in a general-purpose language, like:

    
    
      var all_srcs = qw('a.c b.c')
      if_(files_changed(all_srcs)).do_(changed_srcs => {
        changed_srcs.forEach(src => compile(src))
      })
      fn compile(src) {
        exec('gcc', CFLAGS, '-o', to_o(src), src)
      }
    
      if_...
    

This simple process would cover 99% of common cases and you’re still in
control of everything. Just prepend:

    
    
      if (os == 'msys2') {
        CFLAGS += ' ...'
      }
    

And that’s it. It is a long-term usability, because you may open this file a
year later and still figure out what needs to be done in few seconds.
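
For what it's worth, roughly the same idea written in plain Python (a sketch;
the flag handling, platform check, and file list are illustrative):

    import os
    import subprocess
    import sys

    CFLAGS = ["-Wall", "-O2"]
    if sys.platform == "win32":  # e.g. an MSYS2/GTK build needing extra flags
        CFLAGS += ["-mms-bitfields"]

    SRCS = ["a.c", "b.c"]

    def changed(src, obj):
        # Rebuild when the object file is missing or older than its source.
        return not os.path.exists(obj) or os.path.getmtime(obj) < os.path.getmtime(src)

    for src in SRCS:
        obj = src[:-2] + ".o"
        if changed(src, obj):
            subprocess.run(["gcc", *CFLAGS, "-c", src, "-o", obj], check=True)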

~~~
wwright
I'd recommend you take a look at how Bazel works (Meson may be similar if you
look further, but I haven't used it much myself). The default interface you
get is relatively "high-level", but everything behind the scenes is a general-
purpose system like what you describe, and you can customize it pretty deeply.

What makes it _really_ great IMO is that the language and tool are designed
around best practices. For example, your scripts can't actually execute
anything: they can only tell the build system what command would be used to
build the file and what the dependencies would be. The sandboxing allows the
build system to be pretty hermetic without much effort. This means that it can
always parallelize your build, and incremental builds are always fast and
correct.
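
For a feel of that "high-level" default interface, a minimal BUILD file might
look something like this (the target names and file names are invented;
Starlark is syntactically a Python dialect):

    # BUILD -- targets declare what to build; Bazel works out how and in what order.
    cc_library(
        name = "hello_lib",
        srcs = ["hello.c"],
        hdrs = ["hello.h"],
    )

    cc_binary(
        name = "demo",
        srcs = ["main.c"],
        deps = [":hello_lib"],
    )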

------
jnxx
> Perhaps the best known example of this kind of tool is Make. At its core it
> is only a DAG solver and you can have arbitrary functionality just by
> writing shell script fragments directly inside your Makefile.

I am still fascinated by redo
([https://redo.readthedocs.io/en/latest/](https://redo.readthedocs.io/en/latest/)),
which serves exactly the same purpose as Make but turns its interface inside
out: Make is a DSL for describing an acyclic dependency graph, with attached
command lines to compile, link, install or run stuff, using a minimal set of
operations. The fact that Make is not language-specific, and one just uses
shell commands to build stuff, makes it versatile. But there are edge cases
which mean people often prefer to do "make clean", because dependencies might
not be captured completely.

"redo" is internally more complex, but basically it is a set of just ten shell
commands which are used declarative to capture dependencies, and are part of
build scripts. For example,

    
    
        #!/bin/sh
        # Declare dependencies; redo re-runs this script when either file changes.
        redo-ifchange /usr/local/include/stdio.h
        redo-ifchange helloworld.c
        # $3 is the temporary output file that redo renames to the target on success.
        gcc -I/usr/local/include helloworld.c -o $3
    

has the effect of rebuilding the program if the file
"/usr/local/include/stdio.h" or the source file "helloworld.c" has changed -
be it by the programmer or by a system update. That's it. No "clean" command
is needed.

The result is, in terms of user interface and usage, fascinatingly simple (I
tried it with a ~20,000-line C++ project which generated a Python extension
module using Boost). The lack of special syntax needed is just astonishing.

But I wonder how well it would cope with all the additional complexities of
autoconf and autotools - knowing that all the complexity of these tools is
usually there for a reason.

~~~
Nullabillity
If you trust yourself to capture the dependencies totally, then it doesn't
really matter whether you use make or redo; you could add the stdio.h
dependency to either. But the only way to actually achieve reliable clean-less
builds is to run the build in an environment where you either take away all
access to non-dependencies (like Nix[0]) or automatically record all accessed
files (like Tup[1]).

GCC also supports emitting header lists in a format that Make understands (the
-MD/-MMD flags), but that won't cover non-GCC targets, or be as comprehensive
as doing it in the build system.

[0]: [https://nixos.org/](https://nixos.org/)

[1]: [http://gittup.org/tup/](http://gittup.org/tup/)

~~~
jlokier
GCC actually gets it wrong, as does almost every compiler and Makefile
combination.

Consider this realistic example. Realistic in that I've encountered it in the
real world and seen buggy demos shipped because of it, with non-reproducible
bugs.

We have a search path for headers, like so:

    
    
        -Imyinc -I../dep1/inc -I../dep2/inc -I/usr/include
    

We compile a source file which contains:

    
    
        #include "creative.h"
    

GCC outputs a dependency which is read into the Makefile next time:

    
    
        object.o: ../dep2/inc/creative.h
    

Looks good! Later that day we update from our upstream dependencies:

    
    
        git pull
    

Which adds a file ../dep1/inc/creative.h

Recompiling, the project works:

    
    
        make
        ...
        make test
        => 144,123 pass, 0 fail - well done, you earned a coffee!
    

We captured automatic dependencies, this should be good to ship. But no:

    
    
        make clean
        ...
        make
        ...
        make test
        => FAIL!
    

What happened? Turns out someone upstream moved (by copying) creative.h to
../dep1/inc and forgot to remove the old copy in ../dep2/inc. Then someone
edited a data structure in that file. Automatic dependencies gave everyone
false confidence in the build results seen by different people.

That kind of automatic dependency is insufficient because it doesn't capture
"negative file results" during path searches, where the output depends on a
file _not existing_, and changes if a new file is created.

It is possible to use Makefiles with this handled accurately in the automatic
dependency tracking, but I haven't seen it done often.

An equivalent problem occurs in caches which automatically track the data
dependencies used to generate results: for example, web pages generated from a
combination of files, templates and data; build artifacts such as container/VM
images; or, at a much lower level, data dependencies inside calculations with
conditional branches. To do caching right, they must validate the negative
results that arise in searches as well as the positive ones. I've seen a
surprising number of cache implementations which don't try particularly hard
to get this right.
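
One way to picture the fix (a sketch of mine, not from the comment): record
every probe a search makes, including the misses, and treat a cached result as
stale if any miss would now hit.

    import os

    def find_header(name, search_path):
        """Resolve a header, recording every probe (hits and misses) made on the way."""
        probes = []  # (candidate_path, existed?) pairs
        for directory in search_path:
            candidate = os.path.join(directory, name)
            exists = os.path.exists(candidate)
            probes.append((candidate, exists))
            if exists:
                return candidate, probes
        return None, probes

    def probes_still_valid(probes):
        # A cached result only holds if every hit is still a hit and,
        # crucially, every miss is still a miss.
        return all(os.path.exists(path) == existed for path, existed in probes)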

------
lcuff
Context matters, and context changes over time, making the whole process more
difficult. When Apple was exploring how to build the Lisa (their first GUI
system), they did user testing on both a single-button mouse and a
multi-button mouse. New users were confused by the multi-button mouse, so
Apple went with a single button. But as the years go by and users get more
sophisticated, it's clear that multi-button mice are better. There's no clear
right choice that spans time. How fast are users going to come up to speed?
What level of difficulty are you foisting on new users? Will they persist
through that difficulty?

~~~
RangerScience
Take a cue from games: gradually unlock more features as people use the
existing ones.

------
Tainnor
I don't know. There is some truth to this, but it's the same kind of argument
I've heard from Rails people, EmberJS people, Maven people, and many others;
it's the famous "you're holding it wrong" argument.

Sometimes you really are the best judge of what you need, and when the system
works against you because, for whatever reason, the designer of the original
system thought your problem was not an actual problem, you have to resort to
weird monkeypatches or workarounds.

I would be fine with the "this problem has been solved" argument if we could
all agree on objectively right solutions, and in some cases we can (e.g. you
don't want to hand-roll your own crypto), but more often we just don't.

------
ssivark
I like the two graphs as a way to organize the conversation. IMHO the key to
success (max bang for the buck) is better composability and as little
boilerplate code as possible. Haskell and Julia (for example) do this
fantastically well.

The essential theme of long-term usability is very close to what Guy Steele
talks about in his fantastic talk/article _Growing a Language_: you want as
little as possible embedded into the language, and as much as possible farmed
off to libraries, so that users can compose the pieces they like without
drowning in too much code.

------
m463
You could make the same argument for presales vs postsales.

There are numerous examples of this:

The look of the Apple keyboard in a store vs. its day-to-day functionality (or
admittedly many Apple products, such as a glossy display).

Any RGB product for sale now -- RGB keyboards, RGB mice, RGB computers, RGB
system memory. (Get it home, turn it off.)

Meanwhile, a trackball or weird vertical mouse might be completely
unapproachable, but for the folks who need them they are usable forever after
putting in the time.

------
chewxy
Maximal cooperation?

Two schools of thought:

1\. Maximal cooperation requires maximal composability

2\. Maximal cooperation requires some dictatorship

Interestingly, I find that this is embodied in two of my favourite languages:
Go and Haskell.

Haskell is all about composition, and it achieves this through the
dictatorship of its type system.

Go is dictatorial in the sense that there is one way to do things, but it
encourages compositionality, coming from a Unix world where pipes are
everything.

------
sanxiyn
I consider this a fundamental insight of the Rust programming language: that
it is a language optimized for long-term usability.

No wonder it is struggling for adoption.

~~~
MaxBarraclough
I wouldn't say Rust is 'struggling'. Compared to other languages, like say D
and Nim, it's doing well.

No language could possibly topple its incumbent rivals overnight. To do this,
it would have to have great advantages over existing languages, _and_
excellent interoperability, _and_ an approachable learning-curve. That's
essentially a contradiction. If your language offers a new and better way to
do things, it pretty much _must_ be unfamiliar to those who use older
languages. If the concepts involved were similar, you'd be releasing a
library, or publishing a compiler optimisation, rather than developing a whole
new language.

Perhaps that's a bit of a generalisation though. TypeScript isn't doing
anything new, it's just adding an old and familiar feature (static type
checking) to an old and familiar language that lacks it. In TypeScript's case,
the feature isn't ground-breaking, but it's valuable enough that it may be
worth the pain of using a different language.

~~~
Symmetry
It seems like C++ offered all of "it would have to have great advantages over
existing languages, and excellent interoperability, and an approachable
learning-curve" and worked hard and made whatever compromises were needed to
do it. Which is probably why it took off relatively quickly.

