The best interface is no interface (cooper.com)
215 points by potomak on Dec 23, 2012 | 91 comments

I have two criticisms of the article:

1. Every story is a 'just so' story where the way the system works is exactly the way the user wants the system to work. Great, it's like putting a button front and center on your app and the user wants to push that button and look! It's right there! Awesome. Except I don't want to push that button, I want to push the other button that's now hidden away because the designer is striving for "No UI". The new minimalist interface is now actively fighting me. What if I don't want my car to unlock when I approach it?

2. Using AI is a "step 2: ???" solution, and the flaw with it is best exemplified by the columnist who bought a pregnancy book for a pregnant friend on Amazon... and years later, his Amazon suggestion stream is still filled with baby clothes age-appropriate for his friend's baby. Every adaptive step the non-interface takes on the basis of past behaviour is a step away from future behaviour that differs from past behaviour.

I suppose my criticisms come down to the fact that the article doesn't seem to acknowledge that design trade-offs are a normal part of interface work. It strongly implies there's a hallowed land where every use case is obvious and accounted for and we all just "do". Annoying, in the same way that we all get annoyed when another framework comes along and is in its silver bullet phase where it's the awesomest solution to everything you need it to do.

>2. Using AI is a "step 2: ???" solution, and the flaw with it is best exemplified by the columnist who bought a pregnancy book for a pregnant friend on Amazon

Flaws like this might exist right now, but that's really a "bug". It's a failure on our part (the software designers and engineers) to build a system that actually works for the human. So sometimes when we try to make our software smart, it ends up being [maddeningly] stupid. But again, that's because we didn't do our jobs right, not because they can't be done right.

So with a little more smarts from us, we could make a system that's a lot more flexible and allow for these sorts of things. We have to acknowledge that a user shouldn't have to fit our system exactly and we should allow for "Buying a friend some baby clothes but not have it think we have a child" cases by being smarter about how the system learns. We need to provide it with better context.

Doesn't "provide better context" just push the problem one turtle away? How could I communicate to the website that the item for which I'm shopping is for a friend, not for me, and that I don't want the current actions to figure into future adaptive behaviour? Perhaps a checkbox? But now we're adding interface, which we were trying to eliminate in the first place.

I suggest that this is an intractable problem, because there's a fundamental tension between ease and flexibility. 'Optimize for the common case' always seems like a good idea (the AI suggestion is just 'let past history determine the common case'), but we all have different common cases, and the heuristic helps us not at all when we don't want to execute the common case - which is the hardest part of the design problem anyway.

I agree with the idea, generally, that we need to be smarter about how things work for people. I think "No UI" is like "No SQL": An appropriate solution in a certain set of cases, but nothing like a general solution for all.

>Doesn't "provide better context" just push the problem one turtle away? How could I communicate to the website that the item for which I'm shopping is for a friend, not for me, and that I don't want the current actions to figure into future adaptive behaviour?

That's part of the implementation, again, I think. It would be silly for the implementation to treat just one instance of a behaviour so seriously. If the user buys something once but never shows any interest in such products again, those products should slowly be ranked less and less relevant. If the user instead continues to purchase or look at baby items, the system can be more confident they're wanted. Again, it's not just the products themselves but the behaviour surrounding them. Temporal context is important, too.
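One way to picture that "temporal context" idea is an interest score that decays unless it's reinforced. This is just a minimal sketch - the half-life, weights, and function names are all assumptions for illustration, not anyone's real recommender:

```python
import math
import time

HALF_LIFE_DAYS = 30  # assumption: interest halves every 30 days without reinforcement

def relevance(events, now=None):
    """Score a product category from timestamped interactions.

    `events` is a list of (timestamp, weight) pairs - e.g. a purchase
    might weigh 3.0 and a page view 1.0. Old events decay away unless
    the user keeps interacting, so a one-off gift fades out on its own.
    """
    now = now or time.time()
    decay = math.log(2) / (HALF_LIFE_DAYS * 86400)  # per-second decay rate
    return sum(w * math.exp(-decay * (now - t)) for t, w in events)

now = time.time()
one_off_gift = [(now - 400 * 86400, 3.0)]                    # one purchase, 400 days ago
ongoing = [(now - d * 86400, 1.0) for d in range(0, 90, 7)]  # weekly views for 3 months

print(relevance(one_off_gift, now))  # near zero: the gift has decayed away
print(relevance(ongoing, now))       # large: sustained recent interest
```

Under a scheme like this, the pregnancy book stops haunting the suggestion stream on its own, with no extra checkbox for the user.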

I agree fully that just heaping on more interface for "special cases" is a bad idea. We just need to be smarter about...how our software is smart.

The funny thing is, in Amazon's case, there are multiple ways to do that - tell the website you're buying something as a gift during checkout, or when you get bad recommendations, click 'fix this recommendation'. Of course, even though I work there * , I never remember to do the former, and usually can't be bothered to do the latter ... and I reckon the majority of people don't know that either option exists (in a couple of cases I've even had people ask me why the option didn't exist!)

It's a pretty good case study: even though the UI is there, front and center (the 'fix this' link is usually right underneath recommendations), it doesn't get used because most users would rather do 'Step 1: get good recommendations' rather than 'Step 1: get mixed-quality recommendations; Step 2: take action to improve recommendations; Step 3: get good recommendations'.

* And, of course (standard disclaimer) I don't speak for or represent Amazon in any way...

> But now we're adding interface, which we were trying to eliminate in the first place.

camtarn already explains how you make this part of the natural interface. I'm going to address an apparent confusion in your argument: reducing the interface does not mean eliminating it entirely. For example, the article's discussion of the car that should unlock as you approach does not mean you remove the lock and key.

> An appropriate solution in a certain set of cases, but nothing like a general solution for all.

And the article isn't advocating for the removal of all UIs. It even goes on to advocate certain UIs. Don't just read the headline. Take a few minutes and consider what's being said.

>> the columnist who bought a pregnancy book for a pregnant friend on Amazon

> that's really a "bug"

This thinking worries me a lot, especially when applied to what Google is heading for.

So now many people are removing all buttons and settings, because they think that every button or setting is a failure to guess the user's need.

The epitome of this move is probably Google Now, and I fear it is going to fail or to stink. My instance of Google Now tries to guess when I will drive back home and notifies me with a traffic estimate. So far, it has been successful and mildly useful once in 20 tries - so in fact, I have been spam-notified 19 times by Google itself! And me driving home has relatively high predictability compared to buying books, going out, etc.

I just read Thinking, Fast and Slow for the second time, and the gist of it, in this context, is that the world is much less predictable than we think it is. If the Bayesian statistical approach, or even simple formulas, work better than intuition or experts in so many domains (financial and political forecasting, clinical diagnosis, etc.), it doesn't mean they should be used; it means those domains have very low predictive validity. For instance, the only valid and proven prediction for next year's stock price is "80% chance of being between -10% and +30% of the current price", and this is useless common sense. Just as the only correct explanation for above-average earnings this year at any company is "below-average results last year produced lower expectations, and we are regressing to the mean".

Moreover, if a few things in my life do turn out to be predictable, I probably would not want to be reminded of that saddening fact. Suppose I am hiking next weekend in the mountains nearby. If a Google Now thingy tells me, two hours before leaving, that I should not forget my socks, it will give me the same feeling as a caring and annoying mother: "Yes Mom, I'll call you from there; no Mom, I'll not forget to wash my hands..." Who liked that as a teenager?

So, either personal life events are not predictable, or they should not be predicted because most people would rather keep the illusion they are in control.

What technology needs to do is empower users with helpful (or funny) tools. When I am trekking somewhere in the middle of nowhere, I feel empowered to be able to pull up a satellite map and GPS positioning on my phone, and I use it whether I'm a bit lost or just want to check the route again. But I would really, really hate it if the next Google Maps detected from my speed that I am hesitating, and popped up "Just turn left after the next oak tree to get back to your car, you sucker".

Google's new trend is toward parenting its users, and I am not sure it is the correct move.

> the world is much less predictable than we think it is

I don't know anyone who thinks the world is predictable. Anyone. Your problem is scope. The world is not predictable. But small pieces are. And the world is made up of many, many small pieces. The challenge is discovering those small pieces. Not all are obvious.

And those pieces which we have discovered we can predict at a reasonable level, we ignore.

Take automatic doors at the entrance of stores. We approach the entrance, the door recognizes this, and the doors open. This is prediction. We are predicting that someone approaching in a particular area will most likely want to enter the store, so we prepare by opening the door. This isn't 100%, but it's an overwhelmingly effective prediction.

> Who liked that as a teenager?

I'm not a teenager anymore, though, so your reasoning doesn't follow. You're upset that if you are lost and want to find your way back, Google will, maybe, tell you?

Seems like your problem is personal, and frankly, a bit selfish. It's easily solved by not using the tools that you do not want to use.

Bret Victor's The Magic Ink gives some good starting points for applying context to make software smarter: http://worrydream.com/MagicInk/

Yep, I'd linked to it twice already in this comment thread! I agree fully.

>It strongly implies there's a hallowed land where every use case is obvious and accounted for

That land might not exist, but for every problem we try to solve, we should search for it relentlessly, as if it did exist. It's not a framework, not a means to an end; it's a goal, an end in itself.

1. What if I don't want my car to unlock when I approach it?

Disable it and choose to use the smart phone app. They are not mutually exclusive, and you've already used services just like this. Automatic doors that open as you approach them at a store? And when they aren't powered, you can still open them manually. Bad implementations won't open if they aren't powered.

2. it is best exemplified

If that's your best example, I feel bad. It's merely a flaw in the system, easily resolved. It's the powered door that won't open without power.

Designing these intelligent interactions is not easy. And dealing with exceptions is challenging. They are, after all, exceptions. But don't assume that designing these interactions should account for every exception. Rather, it should make normal easy and make exceptional resolvable.

I am wondering if the Unix shell is the no-UI approach to the first problem. The Unix shell doesn't provide a discoverable, nice interface. On the other hand, it is a minimalist interface that caters to the task "manipulate files in a multitude of ways" very well.

A CLI is not graphical, but it is still a user interface - and a very opaque one; you can't use it at all without prior knowledge. It's the complete opposite of the ideal interface in this context.

I've found the CLI on Cisco routers (and some other, non-Cisco routers) to be very discoverable. At any point, you can type a '?' (even on a half-typed command) and get a list of commands/options with descriptions. Okay, it's a summary, and it helps to have some context, but it's not as opaque as a Unix command line.

Then there was Lotus, which (at the time it was written) was driven by a text-based command line. Type '/', and the cursor would drop to the penultimate line; a list of valid commands/options would appear on the bottom line. You could either type the command/option or use the arrow/tab keys to select it. And as you typed, the bottom line would change to always list valid options (or a description of what to type next, like "(type a number)").

Heck, even VMS had an extensive help system available at the command line (type 'help'---it was interactive). Please don't judge all CLIs by Unix and MS-DOS.

Don't judge all CLIs by Unix or MS-DOS.

But isn't that the point?

Being presented with a blue screen in WordPerfect is a minimal interface, but to do anything you have to learn how to use it - how to get the menus up, and so on.

And while good design is important, and tends to reduce interface clutter, it's also important to let users have easy access to the stuff they need.

Reducing the interface to what users need is a tricky skill, and risks angering power users. (Google dropping + to force inclusion is a notable example.)

With the Amazon example, just remember that they don't show you product suggestions to help you; they show them to sell more product.

If on average the technique works across the entire customer base, then they will apply it.

While it would certainly be better if they were more accurate in that fringe case as well, if it took 4x the effort to eke out a 20% improvement, that might not be a good trade-off for them.

I found the article somewhat light on useful examples; however the premise is very interesting, so I would like to bring a couple of my own examples to the table:

In a restaurant named "The Herb Farm" in the Seattle area you don't pay when you eat; you pay before the dinner, when making a reservation by phone (the menu is prix fixe). So when you're done eating you just walk out - no waiting for the check, no computing tips, none of that nonsense that should not be part of a pleasant dinner. It's incredibly liberating, in a way I would never have understood if I had only been told about it.

For another example, when you arrive at the restaurant "Canlis" they help you out of the car, and then you walk right into the place. When you're done eating, you walk right out and your car is already back there. They spot you on the way to the door and reshuffle the cars so that yours is at the front. So you just walk out, get in the car, and drive off. As it turns out, searching for a place to park, parking, pocketing the keys, etc. is a huge mental overhead. I only realized that once I was liberated from it. The typical valet service actually does nothing for me - there is still the overhead of asking for your car, waiting for it to be brought around, tipping the valet. Meh.

Both places charge a pretty penny for their services, so they can afford the "luxury" of good service, including good wages for the employees who make it possible. Sadly, neither does both of these things, which is still an open niche in Seattle dining. But more to the point at hand: where the restaurants buy good service at the cost of quality labor, computer systems could do the same at the cost of good engineering - that is, remove unnecessary delays and accidental complexity.

Your first example doesn't make sense to me. Why is the restaurant optimizing for the leaving experience? They should be optimizing for the eating experience. Like deciding to get an extra appetizer because the people you're with got one that looks really good. Or deciding to break your diet and get the dessert anyway.

It's prix fixe: the dinner consists of 14 courses and lasts three or four hours. You don't get to choose the food any more than you get to choose the music. There's an exception for extra wine (which comes with a bill), but few people take it, as what's served is plentiful and well matched.

And there is plenty of interaction with people about food, just none about things that aren't food.

Turning tables is important for some restaurants, more so than getting an extra appetiser or dessert.

I'm flattered. And thankful to have my essay be voted to #3 on Hacker News.

I'm speaking at SXSW 2013 about "the best interface is no interface." This is an evolving idea, and I hope to have some gaps filled by the time I speak in March. So thank you all for your feedback.

I'm collecting more examples at nointerface.tumblr.com. It's meant to be an inspiration site for people interested in this movement - an idea Tumblr. Starting mid-January, I'm hoping to have two new posts a week, but that's mostly dependent on finding great examples. So please tweet any to @goldenkrishna... I'll give credit, of course.

> So when you're done eating you just walk out - no waiting for the check, no computing tips, none of that nonsense that should not be part of a pleasant dinner.

I dunno. If the service and food were good, I like commending the staff on it. It would feel awkward to approach the staff without any reason, because I'd already paid earlier. Granted, I think this is where the metaphor breaks down. Interacting with the staff isn't just an interface; it's interacting with humans.

While having no interface is a nice idea, I think that there's also something to be said for "Make common things easy, rare things possible." That is, it's great to try to eliminate the need for an interface, but you can't assume or even expect that you'll always succeed, so you have to have an interface anyway.

Also, a lot of famous security vulnerabilities - like the fact that Windows will execute things on a memory stick without being asked - are the result of people trying for no interface. Having merchants bill you without your consent seems really, really sketchy; maybe the video addresses this in a way that would satisfy me, but I'm sceptical.

a lot of famous security vulnerabilities - like the fact that Windows will execute things on a memory stick without being asked - are the result of people trying for no interface.

This is an excellent observation.

Re payments - consider the situation where I keep going to the same coffee shop, and I just tell the barista to "add it to my tab" and then drink my coffee and leave. That would be very liberating; since I know the barista personally, I would have no problem with them keeping the tab for me, and I would close it weekly or monthly.

Not so crazy now, is it? If they could stretch it to the point where I know the business but not the person working there, I'd be fine with it as well. And the system could learn typical usages, and close the tab for me automatically, or bring it up for review if needed.

And the system could learn typical usages, and close the tab for me automatically, or bring it up for review if needed.

This sounds like an awful idea. I don't trust another developer to accurately be able to predict how I use my money.

What if someone needs to pay a bill with that money this week and can pay off their tab next week? But your AI decides to pay off the tab, you have no way of getting the money back, and now your house has no power.

Or maybe this is only a system designed for upper middle class people who have floating money to do that with. Seems a bit short sighted to me.

Every system is designed for an audience; that's not shortsightedness, that's called focus.

Yeah, if the proposed payment flow were ever implemented there would be abuses; there's a limit to how much interaction you can hide away. In payments especially, at the /very least/ there has to be some form of approval, and usually there needs to be a payment method selection. No interface is nice, but some things require interaction you can't magic away.

You mean like Square Wallet? It seems to be doing fine.

There's authorization in Square Wallet, and it's nowhere near the no-interface that the article was drooling over.

Minor details aside, this is a spot-on analysis in the grand scheme of things.

Computers should be doing more for us. They're smart, they're good at logic, they can make decisions. It's our job as programmers to make them do more and allow us to do less.

I'm tired of complexity. I don't want a car that has a touch screen—I want one with a knob that's tactile and has a blue-to-red gradient that makes sense and only controls the thing it looks like it controls. I don't want to think about it. I don't want it to take three steps and two cluttered screens.

And if my fridge is going to be smart, I want it to be smart about being a fridge. I want it to do one thing smartly: make things cold. If it's going to do something else smartly, I want it to be relevant to keeping my food, so I don't know, figure out when I need new milk by allowing me to scan the barcode when I buy it. Then how about making it available on my phone so I can answer the age-old question "Do I need to buy milk?" when I'm at the grocery store. That might actually be useful.

But for pete's sake, if it has twitter, I will not buy it.

Computers should simplify our lives. If it adds complexity, screw you, start over and try again.

I don't want a car that has a touch screen

Hell yes. Whoever decided to put touchscreens into cars deserves the Darwin Award.

We live in a bit of an unfortunate age where perfectly time-tested interfaces are replaced with inferior touch-controls left and right, just because "we can".

This applies even to the canonical application: the phone. It's great how many things we can do on our phones now, but the core feature (telephony) has suffered badly. I can't take a call without looking at the phone anymore, much less place one. In the winter, every incoming call turns into a little challenge (how quickly can I get rid of that glove without dropping my precious $600 slab of glass onto the icy concrete?).

Rewind to the 1990s. Many people could tap out an entire SMS on their Nokia without taking it out of their pocket. We could dial numbers without looking because we knew a contact was "four taps down" in the address book, and the buttons gave us a reassuring "click" when pressed.

The industry needs to rediscover tactile feedback and predictable latency as desirable traits. Early Android phones had a jog dial (Sony) and dedicated camera buttons (HTC), but they largely disappeared for stupid reasons.

I really can't wait for Apple to re-"invent" physical controls in one of their future models. Perhaps the telephony-experience on our expensive pocket-computers will then finally catch up to what we had 20 years ago...

Exactly. Mackie now makes a little audio console that works with your iPad: plug in your microphones and amps, slide in your iPad, and run it from there. The problem? There's no haptic feedback at all, nor any memorable locations! If a horrible noise starts being emitted, you can't just reach to a known spot and slam down a master; you need to make sure you're on the right page of the right mix, then find it, and hope your touch registered.

I totally agree with everything you wrote, but since it is going to be a while yet before the industry learns any of these lessons...

You can pretty easily turn any pair of gloves into touch-capable gloves by sewing a small amount of conductive thread through the pad of the finger you primarily use for touch.

I believe it's a matter of technology. Right now touch screens have so many advantages that they make up for the lack of tactile feedback, but as soon as technology allows, it will come back.

> Computers should be doing more for us. They're smart,

But they are not smart; that's the problem. We have to program everything.

We [as developers] make them smart for everyone [as consumers].

The Google Wallet flow they describe is not correct. All you need to do is have the screen on (not unlocked) and hold the phone near the NFC reader. If you're not recently-authenticated, you need to type your PIN. That's it.

You do not need to unlock your phone or navigate to the Wallet app, and you don't need to select the credit card to use at payment time. Also worth noting is that tap-and-pay works even without a data connection.

The real lessons to learn from this are: people are paranoid about paying for things ("how will my phone know to make a payment if I'm not in the app?"), and people don't read documentation (the first few times you use Wallet, it's explained exactly how you make a payment).

One last thing to think about: creepiness. As a society, we have the technology to predict exactly what you are going to buy and when, and we can use cameras to recognize your face. So if you usually buy a latte every morning, the coffee shop could just make it in advance, and you could walk into the store and pick it up. The security tape would see your face picking up your coffee, and automatically deduct the money from your account. But I'm guessing that the HN crowd, despite their desire for convenience and technology, would hate that for privacy concerns. Do you really want your coffee shop tracking your every move? Who will they share that information with?

(Why is the complete lack of an interface creepy? Because nothing else we do is completely lacking in interface; usually you do something to get a result -- doing nothing to get the same result is weird.)

There is no need for the creepy camera - this can be done with Bluetooth 4 + geolocation (like Square does).

Everyone has a face. Not everyone has a phone with Bluetooth 4, a full battery, and a location fix.

(There's a reason why we carry around plastic cards for paying with things. They're cheap and simple.)

This is a well-trod point, but one that's always good to be reminded of. A similar argument is made in The Design of Everyday Things - if your interface needs an instruction manual, even if it's only one word (for instance the word "pull" on a door) then the interface is not doing its job.

I'd highly recommend DOET for anyone interested in this sort of thing.

The worst interface is one that tries to learn my habits without any broader knowledge of my personality and the world as a whole. I don't want my axe to adapt to my hand and to my way of using it. Most of all, I want my axe to be reliably predictable.

So no, thank you, no self-learning climate control systems, microwaves or lawn mowers.

Wouldn't you want to own a Roombamower that could automatically mow your lawn after a few times around manually? Or a microwave that could recognize types of food with an internal camera, and suggest optimal cooking times based on total times you've done in the past?

Maybe you'd rather stick with what you're comfortable with right now, but don't count out great ideas that you might not have thought of yet.

The problem with interfaces that learn from user behaviour is that they have to be almost perfect to be usable.

Simple, predictable interfaces that the user can understand and control are okay. Self-learning interfaces that do what the user wants are okay. But there is a gap between the two with poor (and even good but imperfect) learning interfaces.

A microwave that sets the timer itself would be great - unless it accidentally sets 5% of the meals on fire. Even a small failure rate is very frustrating, because the user cannot accurately predict and control the system's behaviour.

(It might be fine for a microwave - you can see that 20 minutes for a pack of popcorn is wrong, and cancel - but the same issues appear in many other interfaces.)
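One common way to bridge that gap is to let the learned system act on its own only when it is confident, and fall back to suggesting otherwise. Here is a deliberately crude sketch - the `suggest_timer` function, the agreement heuristic, and the threshold are all hypothetical, just to illustrate the "auto when confident, ask when not" pattern:

```python
def suggest_timer(history, food):
    """Return (seconds, auto): auto=True means set the timer without asking.

    `history` maps food -> past cook times in seconds. We auto-set only
    when there are several past times and they agree closely; otherwise
    we merely suggest, keeping the user in control.
    """
    times = history.get(food)
    if not times:
        return None, False                       # unknown food: stay fully manual
    avg = sum(times) / len(times)
    # crude confidence proxy: at least 3 samples, spread within 10% of the mean
    confident = len(times) >= 3 and max(times) - min(times) <= 0.1 * avg
    return round(avg), confident

history = {"popcorn": [120, 125, 118], "leftovers": [60, 300]}
print(suggest_timer(history, "popcorn"))    # (121, True): consistent history, auto-set
print(suggest_timer(history, "leftovers"))  # (180, False): inconsistent, suggest only
```

The point is that the failure mode degrades to "the user had to press a button", not "the popcorn caught fire".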

"The problem with interfaces that learn from user behaviour is that they have to be almost perfect to be usable."

You listed some practical engineering issues.

I can list numerous psychological issues, such as the anxiety of knowing the machine's failure rate will probably be much worse than mine, and the anxiety of not knowing how much extra time I'll have to spend on rework. Uncontrollable failure is stressful. Then there are attitude-level issues with being active ("I'm making popcorn") versus passive ("The microwave is making popcorn"). And that's before we get to complacency: "The microwave is really smart, so I'll let my kids use it. Whoops, house burned down after the 20-minute popcorn setting... guess I should feel guilty, or someone should anyway."

Wouldn't you want to own a Roombamower that could automatically mow your lawn after a few times around manually?

If a Roomba vacuum can vacuum the house without training, why would I have to bother training a lawn mower? Shouldn't it work without training?

Or a microwave that could recognize types of food with an internal camera, and suggest optimal cooking times based on total times you've done in the past?

"I see you're trying to make popcorn! I notice you always somehow always have the same brain fart when it comes to microwave popcorn and burn the ever-loving bejesus out of it, shall I set that up for you again?"

Every "self-adapting" interface that I can recall using was a disaster. Then again, the ones I can think of were Windows Start Menu and Microsoft Office features. They were really terrible, though.

I doubt the Roombamower would work that well without training. The Roomba is constrained by the walls, so it can just move about randomly without problem (except for stairs---unless it has a sensor to detect that it's about to tumble down a flight of stairs, you might need to "wall off" the stairs).

However, where I currently live, there are no such constraints for a Roombamower. There is no fence between our yard and the neighbor to the south. There is a fence between us and the north neighbor, but only in the backyard. The driveway isn't a good border because we still have a portion of our yard on the "far side" of the driveway (so the Roombamower can safely cross the driveway to keep mowing), but it shouldn't venture out into the street (no curb---and no sidewalk (another rant) to worry about). I'm not terribly concerned about our back property line as that's a nearly impenetrable thicket of native plant life (Florida, if you are curious).

In our situation, we would need to somehow inform the Roombamower the extent of our yard.

As far as the microwave goes, I can barely operate ours (and here I am, programming computers for a living) and would prefer two dials---power (linear scale is fine) and time (logarithmic would probably work nicely for this).

A microwave that can recognize food is not self-learning and I'm fine with that as far as it's predictable and does its job well.

A Roomba mower that learns - that's where it gets a bit scary. I wouldn't be comfortable with it, because I wouldn't know how and what it learns from me, or how it reacts to changes in the area to be mowed. What if it learns from me to ignore anything (or anyone) that wasn't there during the training phase, by running over it?

I should hope it would include very paranoid and redundant safety systems - we're going up from sucking in dust to cutting at high speed with blades - and that it wouldn't learn to override them! That's more about implementation than the idea, though.

Exactly, and see how quickly we switched to algorithm issues that presumably should override the self-learned part.

"Worst interface is one that's trying to learn about my habits without having a broader knowledge about my personality and the world as a whole."

I'm a big fan of self-learning systems, in many cases. Tivo? I love it. A self-learning climate control system? Sure, as long as I can override it if I'm leaving for a week in January, for example.

Now picture a robotic lawn mower without that broader knowledge of the world. If it has limited sensors, it may mow over things you don't want it to; even if it does it less than you do yourself, it's a problem. How about if it cannot self-diagnose situations where it should not be working or has some of its safety features disabled for some reason?

For some equipment, being able to operate independently is a much harder problem than simply learning where the mowable area of your lawn is.

I have no problem with self-learning devices, so long as they come with an SDK that lets me override their settings.

I think the bigger point is, "Don't put touch screens on them either."

Is that list of steps to use Google Wallet in this article correct? If so, that wasn't the promise of NFC at all! I have an Android phone on which I use the legacy Japanese NFC system which doesn't require waking up the phone at all (it even works if the battery is depleted).

He says that tapping a device against another one is undesirable, but I think people like that kind of "I have to do this for money to disappear out of my account" reassurance.

I can picture it now :D

Me: "This isn't what I ordered."

Cashier: "It's what you ordered last time. We went ahead and made it for you. And we also charged your account. Aren't you delighted?"

Me: "But I just wanted a coffee. Now my account is overdrawn, and you just cost me a $35 overdraft fee."

Of course the decision point still needs to be maintained - that is, the human should retain the trigger on a predictive transaction unless they explicitly give it up, such as in recurring payments or that XKCD $1 bid bot! :-P

No, that list of steps is wrong. Google Wallet doesn't require you to unlock your phone or launch the Wallet app. You just hold your phone up to the reader and enter a PIN.

While no interface may be better, it is not always the simplest or the most utilitarian option.

The Mercedes proximity based, keyless entry system is actually a complex digital abstraction over a mechanical key/lock.

Mercedes has to address a bunch of security concerns such as preventing an adversary from sniffing my key information through my jacket pocket, digitally cracking codes, etc. Since the system tries to protect you from locking your key in the car, more technological components need to be thrown into the mix to detect whether the key is inside the car. If the driver has to reach into his pocket to turn on the ignition it would defeat the purpose of going "keyless", so presumably the ignition system also gets a few layers of complexity. The cost of the whole system would also go up. Repair and maintenance don't sound so appealing either. The whole thing can fail in a lot more ways.
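The usual defense against the sniffing attack mentioned above is to never transmit a static code at all, but to run a challenge-response handshake. A toy sketch of the idea (real automotive systems use dedicated transponder ciphers and pairing protocols, not a shared HMAC key like this):

```python
import hmac
import hashlib
import os

# Shared secret established between car and key fob at pairing time.
SECRET = os.urandom(16)

def car_challenge():
    """Car broadcasts a fresh random nonce for each unlock attempt."""
    return os.urandom(16)

def fob_response(challenge, key=SECRET):
    """Fob proves it knows the secret without ever transmitting it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def car_verify(challenge, response, key=SECRET):
    """Car checks the response against its own computation."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Because every challenge is fresh, a sniffed response is useless for a later unlock; a replayed response fails verification against the new nonce.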

I am not implying it's a bad idea. It very well might become commoditized technology some time in the future and pave the way for other interesting possibilities.

I prefer utilitarian design which is more concerned with simplicity through and through rather than just minimizing the footprint for the user interface.

The title is wrong. I understand how the development community has gotten hung up on the absurd GUIs and CLIs we've had to use but that's not what the word interface means.

    Interface: A point where two systems, subjects, organizations, etc., meet and interact.
A door handle is an interface, a burglar alarm is an interface, etc. The term you're actually looking for is an "invisible interface," as opposed to an in-your-way interface. But if you wish to have the ability to interact with a system, you cannot remove its interface...

One thing that mustn't be overlooked with interfaces that 'learn about your behavior' is that they can lock into a local maximum and can be difficult to retrain without resetting to factory defaults. If your lifestyle changes, can the interface keep up?

That could happen, but really that would be a "bug", not an inherent problem with the design. That would be the developer's job to fix.

I disagree that this would be a bug. It's a design flaw that cannot be corrected by fixing bugs.

It's analogous to security flaws. If there is a flaw in the design, no amount of bug fixing will make the system secure, unless that 'bug fixing' changes the design.

Can you explain why you think it's an inherent flaw in the design?

The way I look at it is: yes, the software should keep a history of user behaviour and base its actions on it, but there must be feedback involved, either explicit or implicit. That way, if I gave some input to the system once but never did so again, the likelihood that that one event affects the future would diminish over time.
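One way to get that diminishing influence is to weight each past event by its age with exponential decay (the 30-day half-life and the "age in days" representation below are illustrative assumptions, not anything the parent specified):

```python
def event_weight(age_days, half_life_days=30.0):
    """Weight of a single past event, halving every `half_life_days`."""
    return 0.5 ** (age_days / half_life_days)

def preference_score(event_ages_days, half_life_days=30.0):
    """Combined influence of all past events of one kind."""
    return sum(event_weight(a, half_life_days) for a in event_ages_days)

# A one-off purchase a year ago contributes almost nothing (~0.0002),
# while a habit repeated over the last few weeks dominates (~2.5).
one_off = preference_score([365.0])
habit = preference_score([1.0, 8.0, 15.0])
```

Under this scheme the pregnancy-book purchase from the earlier comments would have faded out of the recommendations within a few months instead of haunting them for years.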

There could be trickiness around "Bubbles" (like a Search bubble, where it only recommends to you things it thinks you'd like, and never shows you other things). I think those are problematic and should be dealt with. But I don't think that means it's impossible to fix. It's just something that needs to be thought through. I don't have an answer for it right now but that doesn't mean there isn't an answer.

> It's just something that needs to be thought through.

Your statement is what I mean. "Thinking things through" should be done during design. Once you have built the system, it's much harder to compensate for design flaws.

Programming is not designing. Designing is not programming. Fixing bugs is not designing.

You have to design into the UI system a means for it to compensate for changes in user behavior. You don't want a system that takes many uses to train. At the same time you don't want a system that is trained by a single use. For me this is the crux of the problem.

The happy medium that automatically detects deviations from a user's 'normal' behavior _and_ takes the correct action is very hard to design, as it involves AI fuzzy logic.
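A minimal sketch of that "deviation from normal" check, tracking one numeric habit with an exponential moving average of mean and variance and flagging large z-scores (the update rate and threshold are arbitrary assumptions; a real product would need far more than this):

```python
class BehaviorMonitor:
    """Track a single numeric habit (e.g. minutes mowed per day) with
    exponentially weighted estimates of its mean and variance, and flag
    observations that stray too far from the learned 'normal'."""

    def __init__(self, alpha=0.1, threshold=3.0):
        self.alpha = alpha          # how fast 'normal' adapts
        self.threshold = threshold  # z-score beyond which we flag
        self.mean = None
        self.var = 1e-9

    def observe(self, x):
        """Update the model and return True if x looks anomalous."""
        if self.mean is None:       # first observation defines 'normal'
            self.mean = x
            return False
        diff = x - self.mean
        anomalous = abs(diff) > self.threshold * (self.var ** 0.5)
        # EWMA updates of the running mean and variance.
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return anomalous
```

Even this toy exposes the crux the parent describes: `alpha` is an arbitrary answer to "how fast should 'normal' change?", and every choice trades single-use overfitting against slow retraining.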

>Your statement is what I mean. "Thinking things through" should be done during design. Once you have built the system, it's much harder to compensate for design flaws.

I agree with this fully. This is something that would need to be solved before building such a system.

A great example of a system which learns from history, but which also supports changes in behaviour is: http://worrydream.com/MagicInk/#engineering_inference_from_h...

The linked example is specific to one application, but he continues to detail what he thinks would make for a general solution.

This whole thing sounds an awful lot like Bret Victor's “Magic Ink” paper (that's a compliment): http://worrydream.com/MagicInk

If you like the ideas in the OP, you owe it to yourself to chew through Bret's paper. A lot of the same ideas, expanded and thought through.

The best interface is hiding all the steps of a complex process and saying you did away with the interface.

No steps = easy to debug when it goes wrong. You just point at it and loudly whine: "It's not working!" then nobody fixes it because all the back end is "magic".

One thing that came to mind when reading your comment was some of my frustration using OS X. When an app in OS X, or OS X itself, stops working you don't get a blue screen or a "this application has crashed" error - it just stops working. Many of my friends have the illusion that OS X is much more stable than Windows 7 because they see fewer error messages.

I used to see the "spinning beachball of death" scenario every now and then, but on Lion and a 2011 Macbook Air I haven't once. Occasionally third party applications will exit and there will be a "____ has quit unexpectedly" message with an option to relaunch.

Details are available in logfiles; anyone who has a use for those details most likely knows where to find them. The end user can't do anything with memory addresses, etc., so it makes sense that they don't see them.

Of course, Apple provides the details for those who care to look; it just hides the information from the everyday user who doesn't want to. This is just a personal opinion, but failing silently is probably less scary to the everyday user than being immediately presented with a bunch of error messages or a blue screen - even though they might mean exactly the same thing.

Hmm, but you do always get an error message when you relaunch, allowing you to report the error or see some details.

As far as ergonomics I always liked the Mercedes door handles better than those of some other cars, where you can only open the door by gripping from underneath. Interface design and ergonomics go hand in hand.

I think the iPhone is a good example with its one button design and size, as opposed to clunkier cellphones with 3 or more buttons.

1. The iPhone does not have just one button. It has several other buttons at the side, and it has a GUI.

2. The one-front-button deal is one of the many things I hate about iPhones. The "go back" operation is something we do often on smart phones. I want my phone to have that at the front. I also like trackballs there.

I often cite this as one example of how I find the Android interface more efficient and usable. "Go back" applies intuitively to such a broad range of apps, and benefits enough from a constant, ergonomic placement and tactile feedback, that that alone makes a difference in feel.

I hate, hate, hate that Android doesn't have a physical "home" button and instead uses those terrible soft system buttons. First, I pretty much never know which button to hit or what's going to happen. Second, they waste valuable space by being unnecessarily "always on". And my kids are flat out unable to use my Nexus 7 without constantly hitting these buttons by accident. Yuck, yuck, yuck.

Designers often forget about this in their urge to overdesign and show-off. That's probably why you should have a strictly UI/UX person on your team, who can say no and strip the clutter.

The problem with "AI" and endusers is we humans are flawed enough to have a large and only semi-effective science and industry focusing on what amounts to interface failures between "intelligences". Abnormal psych, couples counseling, that sort of thing.

True, you do need to worry about the engineering and stats and this might be close to solved for some trivial problems.

Yes, you also need to worry about the interactions between AI and "normal" people, and this is nowhere near solved even for trivial problems, but it's been slowly improving for decades.

The biggest problem is debugging interactions between AI and "AB-normal" people. How should the AI react when rubbed up against an OCD person, or a psychopath, or a developmentally disabled enduser?

This I believe to be the fundamental failure mode for AI in enduser products, probably enforced by the greedy legal system. If you ignore the most vulnerable members of the population, you've knowingly released a product that could kill them, and that's not going to turn out well. Or you can hyperoptimize it such that your lawnmower is better at dealing with psychopaths than the smartest human, in which case it's hyperregulated by the medical system up to unaffordable cost.

I don't like the idea of AI and a computer in everything I own. Who stores/owns all that information? It also seems like a completely unnecessary security risk (people spying on you by hacking into your fridge :P).

The solution is a lot simpler: don't make me use a computer for everything. I can open my car with my key and pay using cash (or a bank or credit card).

Twitter in your car and apps on the fridge only exist because of the app hype. I don't think they will last.

In other words, the best UI is AI.

Reminds me of this: http://www.youtube.com/watch?v=aXV-yaFmQNk (A Magazine Is an iPad That Does Not Work)

Not that I disagree, it is just fun to think about possible unintended consequences of dependency on AI for every day tasks. I am sure there are some short stories that deal with people in an advanced civilization losing their automation, but I can't seem to track any down on Google at the moment.


Yeah, right. At some point on that curve the UI would grow arms and make me my favorite breakfast every morning. Objects in the world are innately limited by the causes they have in their origin. A pear tree can only ever produce pears unless what is encoded in its seed is changed.

The core thought is of value and basically the direction in which HCI is headed.

Other than that, wow. As a UX designer, I would expect the author to show more critical thinking when evaluating interfaces like a car dashboard and a refrigerator. Put those into context, as you conveniently do with interaction patterns like opening car doors and paying with e-wallets.

From August of this year, but still nice to see this again.

I don't even need to read the blog post. The title says it all. djb wrote about this in the docs for qmail many years ago.

My idea of a great "user experience":

I switched it on/started it up, it did what it's supposed to do, in a predictable span of time, without asking me questions or requiring me to fiddle with anything.

My response to the article: http://wireframes.linowski.ca/2012/12/calling-your-bull-the-... I think it's stretched. Interfaces still have good characteristics.

Do not underestimate the power of the command line.

Tiny point: the article incorrectly implies chkntfs and atmadm were original early-1980s DOS commands, but they date from the 1990s.

Not to mention that it equates all CLI's with that horrible, no-tab-completion abomination that was the DOS command prompt. Sure, the commands in UNIX (and variants) were cryptic, but at least you could have real filenames (not that 8.3 nonsense) and do real programming with the shell, pipes, job control, and of course the already mentioned tab-completion. If you don't understand why some people still love the CLI, you probably haven't used a good one, or don't understand the power a good one gives you. DOS CLI was to UNIX CLI what Win95 was to OS/2 multitasking.

Good article. Not sure why I feel I already read that stuff somewhere.

Lost interest halfway.
