
The best interface is no interface - potomak
http://www.cooper.com/journal/2012/08/the-best-interface-is-no-interface.html
======
fatbird
I have two criticisms of the article:

1. Every story is a 'just so' story where the way the system works is exactly
the way the user wants the system to work. Great, it's like putting a button
front and center on your app and the user wants to push that button and look!
It's right there! Awesome. Except I don't want to push that button, I want to
push the other button that's now hidden away because the designer is striving
for "No UI". The new minimalist interface is now actively fighting me. What if
I don't want my car to unlock when I approach it?

2. Using AI is a "step 2: ???" solution, and the flaw with it is best
exemplified by the columnist who bought a pregnancy book for a pregnant friend
on Amazon... and, years later, his Amazon suggestion stream is still filled
with baby clothes that are age-appropriate for his friend's baby. For every
adaptive step the non-interface takes on the basis of past behaviour, it's a
step away from future behaviour that differs from past behaviour.

I suppose my criticisms come down to the fact that the article doesn't seem to
acknowledge that design trade-offs are a normal part of interface work. It
strongly implies there's a hallowed land where every use case is obvious and
accounted for and we all just "do". Annoying, in the same way that we all get
annoyed when another framework comes along and is in its silver bullet phase
where it's the awesomest solution to everything you need it to do.

~~~
jbrennan
>2. Using AI is a "step 2: ???" solution, and the flaw with it is best
exemplified by the columnist who bought a pregnancy book for a pregnant friend
on Amazon

Flaws like this might exist right now, but that's really a "bug". It's a
failure on our part (the software designers and engineers) to build a system
that _actually_ works _for the human_. So sometimes when we try to make our
software smart, it ends up being [maddeningly] stupid. But again, this is only
because we don't do our jobs right, not that they _can't be done right_.

So with a little more smarts from us, we could make a system that's a lot more
flexible and allow for these sorts of things. We have to acknowledge that a
user shouldn't have to fit our system exactly and we should allow for "Buying
a friend some baby clothes but not have it think we have a child" cases by
being smarter about how the system learns. We need to provide it with better
context.

~~~
fatbird
Doesn't "provide better context" just push the problem one turtle away? How
could I communicate to the website that the item for which I'm shopping is for
a friend, not for me, and that I don't want the current actions to figure into
future adaptive behaviour? Perhaps a checkbox? But now we're adding interface,
which we were trying to eliminate in the first place.

I suggest that this is an intractable problem just because there's a
fundamental tension between ease and flexibility. 'Optimize for the common
case' always seems like a good idea (and the AI suggestion is just 'allow past
history to determine the common case'), but we all have different common
cases, and this heuristic helps us not at all when we don't want to execute the
common case, which is the hardest part of the design problem anyway.

I agree with the idea, generally, that we need to be smarter about how things
work for people. I think "No UI" is like "No SQL": An appropriate solution in
a certain set of cases, but nothing like a general solution for all.

~~~
jbrennan
>Doesn't "provide better context" just push the problem one turtle away? How
could I communicate to the website that the item for which I'm shopping is for
a friend, not for me, and that I don't want the current actions to figure into
future adaptive behaviour?

That's part of the implementation, again, I think. It would be silly for the
implementation to treat just one instance of a behaviour so seriously. If the
user buys it once, but then never shows any interest in such products again,
then they should slowly be ranked less and less relevant. If the user instead
continued to purchase or look at baby items, then the system could be more
certain of that interest. Again, it's not just the products themselves, but
the behaviour surrounding them. Temporal context is important, too.
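
A minimal sketch of this decay idea in Python - a one-off purchase fades out
while sustained interest keeps a category relevant. The half-life constant and
scoring function are illustrative assumptions, not any real recommender's
implementation:

```python
import math
import time

HALF_LIFE_DAYS = 30.0  # after 30 days an event counts for half as much (assumed)

def relevance(event_timestamps, now=None):
    """Score a product category from timestamps of past interactions.

    Each interaction contributes exp(-age/tau), so a single purchase from
    years ago decays toward zero while recent, repeated interest adds up.
    """
    now = now if now is not None else time.time()
    tau = HALF_LIFE_DAYS * 86400 / math.log(2)
    return sum(math.exp(-(now - t) / tau) for t in event_timestamps)

# One baby-book purchase two years ago: effectively forgotten.
one_off = [time.time() - 2 * 365 * 86400]
# Weekly interest in some other category over the past month: still relevant.
sustained = [time.time() - d * 86400 for d in (3, 10, 17, 24)]

assert relevance(one_off) < 0.001
assert relevance(sustained) > 2.0
```

Under a scheme like this the pregnancy-book purchase stops driving suggestions
within a few months on its own, without any extra checkbox in the interface.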

I agree fully that just heaping on more interface for "special cases" is a bad
idea. We just need to be smarter about...how our software is smart.

------
DenisM
I found the article somewhat light on useful examples; however the premise is
very interesting, so I would like to bring a couple of my own examples to _the
table_:

At a restaurant called "The Herb Farm" in the Seattle area you don't pay when
you eat; you pay before the dinner, when making a reservation by phone (the
menu is prix fixe). So when you're done eating you just walk out - no waiting
for the check, no computing tips, none of that nonsense that should not be
part of a pleasant dinner. It's incredibly liberating, in a way I would never
have understood if I were only _told_ about it.

For another example, when you arrive at the restaurant "Canlis" they help you
out of the car, and then you walk right into the place. When you're done
eating, you walk right out and your car is already back there. They spot you
on the way to the door and reshuffle the cars so that yours is at the front.
So you just walk out, get in the car and drive off. As it turned out,
searching for a place to park, parking, pocketing the keys etc etc is a huge
mental overhead. I only realized that when I was liberated from it. The
typical "valet" service actually does nothing for me - there is still overhead
of asking for your car, waiting for it to be brought, tipping the valet. Meh.

Both places are charging a pretty penny for their services, so they can afford
the "luxury" of good service, including good wages to the employees who make
it possible. Sadly, neither place does both of these things, which leaves an
open niche in Seattle dining. But more to the point at hand: where the
restaurants buy good service at the cost of quality labor, computer systems
could do the same at the cost of good engineering - that is, by removing
unnecessary delays and accidental complexity.

~~~
MatthewPhillips
Your first example doesn't make sense to me. Why is the restaurant optimizing
for the _leaving_ experience? They should be optimizing for the eating
experience. Like deciding to get an extra appetizer because the people you're
with got one that looks really good. Or deciding to break your diet and get
the dessert anyways.

~~~
DenisM
It's prix fixe; the dinner consists of 14 courses and lasts 3 or 4 hours. You
don't get to choose the food any more than you get to choose the music.
There's an exception for extra wine (which comes with a bill), but few people
take it, as what's served is plentiful and well matched.

And there is plenty of interaction with people about food, just none about
things that aren't food.

------
Symmetry
While having no interface is a nice idea, I think that there's also something
to be said for "Make common things easy, rare things possible." That is, it's
great to try to eliminate the need for an interface, but you can't assume or
even expect that you'll always succeed, so you have to have an interface
anyway.

Also, a lot of famous security vulnerabilities, like the fact that Windows
will execute things on a memory stick without being asked, are the result of
people trying for no interface. Having merchants bill you without your consent
seems really, really sketchy, and maybe the video addresses this in a way that
would satisfy me, but I'm sceptical.

~~~
DenisM
Re payments - consider a situation where I keep going to the same coffee
shop, and I just tell the barista to "add it to my tab" and then drink my
coffee and leave. That would be very liberating; since I know the barista
personally I would have no problem with them keeping the tab for me, and I
would close it weekly or monthly.

Not so crazy now, is it? If they could stretch it to a point where I know
the business but not the person working there, I'd be fine with it as well.
And the system could learn typical usages, and close the tab for me
automatically, or bring it up for review if needed.
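
A rough sketch of what that could look like - settle the tab on the customer's
usual cadence, but surface it for review when the balance is out of line. The
threshold and function shape here are purely illustrative assumptions, not any
real payments system:

```python
from datetime import date, timedelta

REVIEW_MULTIPLIER = 2.0  # flag tabs at 2x the typical settlement (assumed)

def tab_action(balance, typical_settlement, opened_on, cadence_days, today=None):
    """Decide what to do with an open tab: 'close', 'review', or 'wait'."""
    today = today if today is not None else date.today()
    due = opened_on + timedelta(days=cadence_days)
    if balance > REVIEW_MULTIPLIER * typical_settlement:
        return "review"  # unusual spending: bring it up for review
    if today >= due:
        return "close"   # normal cycle elapsed: settle automatically
    return "wait"        # nothing to do yet

# A weekly tab with a normal balance, one week after opening: closes itself.
print(tab_action(22.50, 25.0, date(2012, 12, 1), 7, today=date(2012, 12, 8)))
```

The point is only that "close automatically or bring it up for review" is an
expressible policy, not that these particular numbers fit any actual coffee
shop.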

~~~
LockeWatts
_And the system could learn typical usages, and close the tab for me
automatically, or bring it up for review if needed._

This sounds like an awful idea. I don't trust another developer to accurately
be able to predict how I use my money.

What if someone needs to pay a bill with that money this week, and can pay
off their tab next week? But your AI decides to pay off the tab, you have no
way of getting the money back, and now your house has no power.

Or maybe this is only a system designed for upper middle class people who have
floating money to do that with. Seems a bit short sighted to me.

~~~
DenisM
Every system is designed for an audience; that's not shortsightedness, that's
called focus.

------
calinet6
Minor details aside, this is a spot-on analysis in the grand scheme of things.

Computers should be doing more for us. They're smart, they're good at logic,
they can make decisions. It's our job as programmers to make them do more and
allow us to do less.

I'm tired of complexity. I don't _want_ a car that has a touch screen—I want
one with a knob that's tactile and has a blue-to-red gradient that makes sense
and only controls the thing it looks like it controls. I don't want to think
about it. I don't want it to take three steps and two cluttered screens.

And if my fridge is going to be smart, I want it to be smart about being a
fridge. I want it to do one thing smartly: make things cold. If it's going to
do something else smartly, I want it to be relevant to keeping my food, so I
don't know, figure out when I need new milk by allowing me to scan the barcode
when I buy it. Then how about making it available on my phone so I can answer
the age-old question "Do I need to buy milk?" when I'm at the grocery store.
That might actually be useful.

But for pete's sake, if it has twitter, I will not buy it.

Computers should simplify our lives. If it adds complexity, screw you, start
over and try again.

~~~
moe
_I don't want a car that has a touch screen_

Hell yes. Whoever decided to put touchscreens into cars deserves the Darwin
Award.

We live in a bit of an unfortunate age where perfectly time-tested interfaces
are replaced with inferior touch-controls left and right, just because "we
can".

This applies even to the canonical application; the phone. It's great how many
things we can do on our phones now, but the core feature (telephony) has
suffered badly. I can't take a call without looking at it anymore, much less
place one. In the winter every incoming call turns into a little challenge
(how quickly can I get rid of that glove without dropping my precious $600
slab-of-glass onto the icy concrete?)

Rewind to the 1990s. Many people could tap an entire SMS on their Nokia
without taking it out of the pocket. We could dial numbers without looking
because we knew a contact was "four taps down" in the address book, and the
buttons gave us a reassuring "click" when they were pressed.

The industry needs to re-discover tactile feedback and predictable latency as
desirable traits. Early androids had a jog-dial (sony) and dedicated camera-
buttons (HTC), but they largely disappeared for stupid reasons.

I really can't wait for Apple to re-"invent" physical controls in one of their
future models. Perhaps the telephony-experience on our expensive pocket-
computers will then finally catch up to what we had 20 years ago...

~~~
cdcarter
Exactly. Mackie is now making a little audio console that works with your
iPad. Plug in your microphones and amps, slide in your iPad, and run it from
there. The problem? There's no haptic feedback at all, nor any memorable
locations! If a horrible noise starts being emitted, you can't just reach to
a known area and slam down a master; you need to make sure you're on the right
page on the right mix and then find it, and hope your touch latched.

------
jrockway
The Google Wallet flow they describe is not correct. All you need to do is
have the screen on (not unlocked) and hold the phone near the NFC reader. If
you're not recently-authenticated, you need to type your PIN. That's it.

You do not need to unlock your phone or navigate to the Wallet app, and you
don't need to select the credit card to use at payment time. Also worth noting
is that tap-and-pay works even without a data connection.

The real lessons to learn from this are: people are paranoid about paying for
things ("how will my phone know to make a payment if I'm not in the app?"),
and people don't read documentation (the first few times you use Wallet, it's
explained exactly how you make a payment).

One last thing to think about: creepiness. As a society, we have the
technology to predict exactly what you are going to buy and when, and we can
use cameras to recognize your face. So if you usually buy a latte every
morning, the coffee shop could just make it in advance, and you could walk
into the store and pick it up. The security tape would see your face picking
up your coffee, and automatically deduct the money from your account. But I'm
guessing that the HN crowd, despite their desire for convenience and
technology, would _hate_ that for privacy concerns. Do you really want your
coffee shop tracking your every move? Who will they share that information
with?

(Why is the complete lack of an interface creepy? Because nothing else we do
is completely lacking in interface; usually you do something to get a result
-- doing nothing to get the same result is weird.)

~~~
ricardobeat
There is no need for the creepy camera - this can be done with Bluetooth 4 +
geolocation (like Square does).

~~~
jrockway
Everyone has a face. Not everyone has a phone with Bluetooth 4, a full
battery, and a location fix.

(There's a reason why we carry around plastic cards for paying with things.
They're cheap and simple.)

------
numlocked
This is a well-trod point, but one that's always good to be reminded of. A
similar argument is made in The Design of Everyday Things - if your interface
needs an instruction manual, even if it's only one word (for instance the word
"pull" on a door) then the interface is not doing its job.

I'd highly recommend DOET for anyone interested in this sort of thing.

------
mojuba
The worst interface is one that tries to learn my habits without having
broader knowledge of my personality and the world as a whole. I don't want
my axe to adapt to my hand and to my way of using it. Most of all I want my
axe to be reliably predictable.

So no, thank you, no self-learning climate control systems, microwaves or lawn
mowers.

~~~
jrajav
Wouldn't you want to own a Roombamower that could automatically mow your lawn
after a few times around manually? Or a microwave that could recognize types
of food with an internal camera and suggest optimal cooking times based on
the times you've used in the past?

Maybe you'd rather stick with what you're comfortable with right now, but
don't count out great ideas that you might not have thought of yet.

~~~
ProblemFactory
The problem with interfaces that learn from user behaviour is that they have
to be almost perfect to be usable.

Simple, predictable interfaces that the user can understand and control are
okay. Self-learning interfaces that do what the user wants are okay. But there
is a gap between the two with poor (and even good but imperfect) learning
interfaces.

A microwave that sets the timer itself would be great - unless it accidentally
sets 5% of the meals on fire. Even a small failure rate is very frustrating,
as the user can no longer accurately predict and control its behaviour.

(It might be fine for a microwave - you can see that 20 minutes for a pack of
popcorn is wrong, and cancel - but the same issues appear in many other
interfaces.)

~~~
VLM
"The problem with interfaces that learn from user behaviour is that they have
to be almost perfect to be usable."

You listed some practical engineering issues.

I can list numerous psychological issues, such as anxiety knowing the failure
rate will probably be much worse than mine, the anxiety of not knowing how
much extra time I'll have to spend on rework. Uncontrollable failure is
stressful. Then there are more attitude-level issues with being active ("I'm
making popcorn") vs passive ("The microwave is making popcorn"). And that's
before we get started on complacency: "The microwave is really smart, so I'll
let my kids use it. Whoops, house burned down after the 20-minute popcorn
setting... guess I should feel guilty, or someone should, anyway."

------
kalleboo
Is that list of steps to use Google Wallet in this article correct? If so,
that wasn't the promise of NFC at all! I have an Android phone on which I use
the legacy Japanese NFC system which doesn't require waking up the phone at
all (it even works if the battery is depleted).

He says that tapping a device against another one is undesirable, but I think
people like that kind of "I have to do _this_ for money to disappear out of my
account" reassurance.

~~~
SolarUpNote
I can picture it now :D

Me: "This isn't what I ordered."

Cashier: "It's what you ordered last time. We went ahead and made it for you.
And we also charged your account. Aren't you delighted?"

Me: "But I just wanted a coffee. Now my account is overdrawn, and you just
cost me a $35 overdraft fee."

~~~
diminoten
Of course the decision point still needs to be maintained - that is, the human
should retain the trigger on a predictive transaction unless they explicitly
give it up, such as in recurring payments or that XKCD $1 bid bot! :-P

------
aleem
While no interface is better, it is not always the simplest or the most
utilitarian.

The Mercedes proximity-based keyless entry system is actually a complex
digital abstraction over a mechanical key and lock.

Mercedes has to address a bunch of security concerns such as preventing an
adversary from sniffing my key information through my jacket pocket, digitally
cracking codes, etc. Since the system tries to protect you from locking your
key in the car, more technological components need to be thrown into the mix
to detect if the key is inside the car. If the driver has to reach into his
pocket to turn on the ignition, it would defeat the purpose of going
"keyless", so presumably the ignition system also gets a few layers of
complexity. The cost of the whole system would also go up. Repair and
maintenance don't sound so appealing either, and the whole thing can fail in
a lot more ways.

I am not implying it's a bad idea. It very well might become commoditized
technology some time in the future and pave the way for other interesting
possibilities.

I prefer utilitarian design which is more concerned with simplicity through
and through rather than just minimizing the footprint for the user interface.

------
lhnz
The title is wrong. I understand how the development community has gotten hung
up on the absurd GUIs and CLIs we've had to use but that's not what the word
interface means.

    Interface: A point where two systems, subjects, organizations, etc., meet and interact.

A door handle is an interface, a burglar alarm is an interface, etc. The term
you're actually looking for is "invisible interface" instead of in-your-way
interfaces. But if you wish to have the ability to interact with a system, you
cannot remove its interface...

------
seanica
One thing that mustn't be overlooked with interfaces that 'learn about your
behavior' is that they can lock into a 'local maximum' and be difficult to
retrain without resetting to factory defaults. If your lifestyle changes, can
the interface keep up?

~~~
jbrennan
That could happen, but really that would be a "bug", not an inherent problem
with the design. That would be the developer's job to fix.

~~~
seanica
I disagree that this would be a bug. It's a design flaw that cannot be
corrected by fixing bugs.

It's analogous to security flaws. If there is a flaw in the design, no amount
of bug fixing will make the system secure, unless that 'bug fixing' changes
the design.

~~~
jbrennan
Can you explain why you think it's an inherent flaw in the design?

The way I look at it is, yes the software should keep a history of user
behaviour and base its actions off that, but there _must_ be feedback
involved, either explicit or implicit. This way, if I gave some input to the
system once but then never did so again, the likelihood that one event should
affect the future would diminish over time.

There could be trickiness around "Bubbles" (like a Search bubble, where it
only recommends to you things it thinks you'd like, and _never_ shows you
other things). I think those are problematic and should be dealt with. But I
don't think that means it's impossible to fix. It's just something that needs
to be thought through. I don't have an answer for it right now but that
doesn't mean there isn't an answer.

~~~
seanica
> It's just something that needs to be thought through.

Your statement is what I mean. "Thinking things through" should be done during
design. Once you have built a system, it's much harder to compensate for
design flaws.

Programming is not designing. Designing is not programming. Fixing bugs is not
designing.

You have to design into the UI system a means for it to compensate for changes
in user behavior. You don't want a system that takes many uses to train. At
the same time you don't want a system that is trained by a single use. For me
this is the crux of the problem.

The happy medium that automatically detects deviations from a user's 'normal'
behavior _and_ takes the correct action is very hard to design, as it involves
AI fuzzy logic.
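
The "detect deviations from normal behaviour" part need not be exotic fuzzy
logic; a plain z-score over recent history already captures the idea. The
window size and threshold below are illustrative assumptions, and a real
product would also have to handle cold start, seasonality, and explicit
feedback:

```python
import statistics

WINDOW = 20       # how many past observations define "normal" (assumed)
THRESHOLD = 3.0   # standard deviations before we call it a deviation (assumed)

def is_deviation(history, value):
    """True if `value` falls outside the user's recent normal range."""
    recent = history[-WINDOW:]
    if len(recent) < 2:
        return False  # too little data to call anything abnormal
    mean = statistics.mean(recent)
    stdev = statistics.pstdev(recent)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > THRESHOLD

# A user who always microwaves things for about two minutes:
usual = [110, 120, 115, 125, 118, 122, 119, 121]
assert not is_deviation(usual, 120)   # normal use: proceed silently
assert is_deviation(usual, 1200)      # 20 minutes: ask before "helping"
```

The deviation case is exactly the crux named above: it is where the system
should fall back to asking the user - that is, to an interface.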

~~~
jbrennan
>Your statement is what I mean. "Thinking things through" should be done
during design. Once you have built a system, it's much harder to compensate
for design flaws.

I agree with this fully. This is something that would need to be solved before
building such a system.

A great example of a system which learns from history, but which also supports
_changes_ in behaviour is:
[http://worrydream.com/MagicInk/#engineering_inference_from_h...](http://worrydream.com/MagicInk/#engineering_inference_from_history)

The linked example is specific to one application, but he continues to detail
what he thinks would make for a general solution.

------
jbrennan
This whole thing sounds an awful lot like Bret Victor's “Magic Ink” paper
(that's a compliment): <http://worrydream.com/MagicInk>

If you like the ideas in the OP, you owe it to yourself to chew through Bret's
paper. A lot of the same ideas, expanded and thought through.

------
zdw
The best interface is hiding all the steps of a complex process and saying you
did away with the interface.

No steps = easy to debug when it goes wrong. You just point at it and loudly
whine: "It's not working!" then nobody fixes it because all the back end is
"magic".

~~~
kapnobatairza
One thing that came to mind when reading your comment was some of my
frustration using OS X. When an app in OS X, or OS X itself, stops working you
don't get a blue screen or a "this application has crashed" error - it just
stops working. Many of my friends have the illusion that OS X is much more
stable than Windows 7 because they see fewer error messages.

~~~
superuser2
I used to see the "spinning beachball of death" scenario every now and then,
but on Lion and a 2011 Macbook Air I haven't once. Occasionally third party
applications will exit and there will be a "____ has quit unexpectedly"
message with an option to relaunch.

Details are available in logfiles; anyone who has a use for those details
most likely knows where to find them. The end user can't do anything with
memory addresses, etc., so it makes sense that they don't see them.

~~~
kapnobatairza
Of course, Apple provides the details for those who care to look, just that it
hides the information from the everyday user who doesn't want to. This is just
a personal opinion, but just failing is probably less scary to the everyday
user than being immediately presented with a bunch of error messages or a
blue-screen error - even though they might mean exactly the same thing.

------
scrrr
As far as ergonomics go, I always liked the Mercedes door handles better than
those of some other cars, where you can only open the door by gripping from
underneath. Interface design and ergonomics go hand in hand.

I think the iPhone is a good example with its one button design and size, as
opposed to clunkier cellphones with 3 or more buttons.

~~~
lake99
1. The iPhone does not have just one button. It has several other buttons on
the side, and it has a GUI.

2. The one-front-button deal is one of the many things I hate about iPhones.
The "go back" operation is something we do often on smartphones. I want my
phone to have that at the front. I also like trackballs there.

~~~
jrajav
I often cite this as one example of how I find the Android interface more
efficient and usable. "Go back" applies intuitively to such a broad range of
apps, and benefits enough from a constant, ergonomic placement and tactile
feedback, that that alone makes a difference in feel.

~~~
pbreit
I hate, hate, hate that Android doesn't have a physical "home" button and
instead has those terrible soft system buttons. First, I pretty much never
know which button to hit or what's going to happen. Second, they waste
valuable space by being unnecessarily "always on". And my kids are flat out
unable to use my Nexus 7 without constantly accidentally hitting these
buttons. Yuck, yuck, yuck.

------
benesch
Prior discussion:

<http://news.ycombinator.com/item?id=4616945>

------
gimbuser
Designers often forget about this in their urge to overdesign and show off.
That's probably why you should have a strictly UI/UX person on your team, who
can say no and strip the clutter.

------
VLM
The problem with "AI" and endusers is we humans are flawed enough to have a
large and only semi-effective science and industry focusing on what amounts to
interface failures between "intelligences". Abnormal psych, couples
counseling, that sort of thing.

True, you do need to worry about the engineering and stats and this might be
close to solved for some trivial problems.

Yes you also need to worry about the interactions between AI and "normal"
people and this is nowhere near solved even for trivial problems but its been
slowly improving for decades.

The biggest problem is debugging interactions between AI and "AB-normal"
people. How should the AI react when rubbed up against an OCD person, or a
psychopath, or a developmentally disabled enduser?

This I believe to be the fundamental failure mode for AI in end-user
products, probably enforced by the greedy legal system. If you ignore the
most vulnerable members of the population, you've knowingly released a
product that kills them, and that's not going to turn out well. Or you can
hyperoptimize it such that your lawnmower is better at dealing with
psychopaths than the smartest human, in which case it's hyperregulated by the
medical system up to unaffordable cost.

------
Proleps
I don't like the idea of AI and a computer in everything I own. Who
stores/owns all that information? It also seems like a completely unnecessary
security risk (people spying on you by hacking into your fridge :P).

The solution is a lot simpler: don't make me use a computer for everything. I
can open my car with my key and pay using money (or even a bank or credit
card).

Twitter in your car and apps on the fridge only exist because of the app
hype. I don't think they will last.

------
gbadman
In other words, the best UI is AI.

~~~
AndrewNCarr
Reminds me of this: <http://www.youtube.com/watch?v=aXV-yaFmQNk> (A Magazine
Is an iPad That Does Not Work)

Not that I disagree, it is just fun to think about possible unintended
consequences of dependency on AI for every day tasks. I am sure there are some
short stories that deal with people in an advanced civilization losing their
automation, but I can't seem to track any down on Google at the moment.

------
blueprint
[http://www.cooper.com/wp-
content/uploads/2012/08/nouisystems...](http://www.cooper.com/wp-
content/uploads/2012/08/nouisystems_goldenkrishna.png)

Yeah, right. At some point on that curve the UI would grow arms and make me
my favorite breakfast every morning. Objects in the world are innately
limited by the causes they have in their origin. A pear tree can only ever
produce pears unless what is encoded in its seed is changed.

------
namank
The core thought is of value and basically the direction in which HCI is
headed.

Other than that, wow. As a UX designer, I would expect the author to show
more critical thinking when evaluating interfaces like a car dashboard and a
refrigerator. Put those into context, as you conveniently do with interaction
patterns like opening car doors and paying with e-wallets.

------
lispertoascheme
From August of this year, but still nice to see this again.

I don't even need to read the blog post. The title says it all. djb wrote
about this in the docs for qmail many years ago.

My idea of a great "user experience":

I switched it on/started it up, it did what it's supposed to do, in a
predictable span of time, without asking me questions or requiring me to
fiddle with anything.

------
jlinowski
My response to the article: [http://wireframes.linowski.ca/2012/12/calling-
your-bull-the-...](http://wireframes.linowski.ca/2012/12/calling-your-bull-
the-best-interface-is-no-interface/) I think it's stretched. Interfaces still
have good characteristics.

------
Zash
Do not underestimate the power of the command line.

------
rjbond3rd
Tiny point: the article incorrectly implies chkntfs and atmadm were original
early-1980s DOS commands, but they are circa the 1990s.

~~~
npsimons
Not to mention that it equates _all_ CLIs with that horrible, no-tab-
completion abomination that was the DOS command prompt. Sure, the commands in
UNIX (and variants) were cryptic, but at least you could have real filenames
(not that 8.3 nonsense) and do real programming with the shell, pipes, job
control, and of course the already mentioned tab-completion. If you don't
understand why some people _still_ love the CLI, you probably haven't used a
good one, or don't understand the power a good one gives you. DOS CLI was to
UNIX CLI what Win95 was to OS/2 multitasking.

------
usablebytes
Good article. Not sure why I feel I already read that stuff somewhere.

------
devb0x
Lost interest halfway...

