1. Every story is a 'just so' story where the way the system works is exactly the way the user wants the system to work. Great, it's like putting a button front and center on your app and the user wants to push that button and look! It's right there! Awesome. Except I don't want to push that button, I want to push the other button that's now hidden away because the designer is striving for "No UI". The new minimalist interface is now actively fighting me. What if I don't want my car to unlock when I approach it?
2. Using AI is a "step 2: ???" solution, and its flaw is best exemplified by the columnist who bought a pregnancy book for a pregnant friend on Amazon... and whose Amazon suggestion stream is still filled, years later, with baby clothes that are age-appropriate for his friend's baby. Every adaptive step the non-interface takes on the basis of past behaviour is a step away from future behaviour that differs from past behaviour.
I suppose my criticisms come down to the fact that the article doesn't seem to acknowledge that design trade-offs are a normal part of interface work. It strongly implies there's a hallowed land where every use case is obvious and accounted for and we all just "do". Annoying, in the same way that we all get annoyed when another framework comes along and is in its silver bullet phase where it's the awesomest solution to everything you need it to do.
Flaws like this might exist right now, but that's really a "bug". It's a failure on our part (the software designers and engineers) to build a system that actually works for the human. So sometimes when we try to make our software smart, it ends up being [maddeningly] stupid. But again, this is only because we don't do our jobs right, not that they can't be done right.
So with a little more smarts from us, we could make a system that's a lot more flexible and allow for these sorts of things. We have to acknowledge that a user shouldn't have to fit our system exactly and we should allow for "Buying a friend some baby clothes but not have it think we have a child" cases by being smarter about how the system learns. We need to provide it with better context.
I suggest that this is an intractable problem simply because there's a fundamental tension between ease and flexibility. 'Optimize for the common case' always seems like a good idea (and the AI suggestion is just 'let past history determine the common case'), but we all have different common cases, and this heuristic helps us not at all when we don't want to execute the common case, which is the hardest part of the design problem anyway.
I agree with the idea, generally, that we need to be smarter about how things work for people. I think "No UI" is like "No SQL": An appropriate solution in a certain set of cases, but nothing like a general solution for all.
That's part of the implementation, again, I think. It would be silly for the implementation to treat just one instance of a behaviour so seriously. If the user buys something once but then never shows any interest in such products again, those items should slowly be ranked less and less relevant. If the user instead continues to purchase or look at baby items, then the system can be more certain of genuine interest. Again, it's not just the products themselves, but the behaviour surrounding them. Temporal context is important, too.
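A minimal sketch of what that temporal weighting could look like (this is a hypothetical scoring scheme, not Amazon's actual system; the half-life constant is an invented parameter):

```python
import math
import time

HALF_LIFE_DAYS = 30.0  # assumed half-life for interest decay; a real system would tune this


def interest_score(event_timestamps, now=None):
    """Score interest in a product category from past events, with exponential decay.

    A single old purchase fades toward zero; repeated recent activity
    keeps the score high.
    """
    now = now if now is not None else time.time()
    day = 86400.0
    return sum(
        0.5 ** ((now - t) / day / HALF_LIFE_DAYS)
        for t in event_timestamps
    )


now = time.time()
one_off = [now - 365 * 86400]                      # bought baby clothes once, a year ago
ongoing = [now - d * 86400 for d in (3, 10, 20)]   # steady recent interest

assert interest_score(one_off, now) < 0.01  # the one-off gift has faded away
assert interest_score(ongoing, now) > 1.0   # recent repeated behaviour still ranks high
```

The point isn't the specific formula; it's that any scheme where a single year-old event still dominates recommendations has no temporal context at all.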
I agree fully that just heaping on more interface for "special cases" is a bad idea. We just need to be smarter about...how our software is smart.
It's a pretty good case study: even though the UI is there, front and center (the 'fix this' link is usually right underneath recommendations), it doesn't get used because most users would rather do 'Step 1: get good recommendations' rather than 'Step 1: get mixed-quality recommendations; Step 2: take action to improve recommendations; Step 3: get good recommendations'.
* And, of course (standard disclaimer) I don't speak for or represent Amazon in any way...
camtarn already explains how you make this part of the natural interface. I'm going to address your apparent confusion: removing the interface from view does not mean eliminating it. For example, the article's discussion of how approaching the car should open the door does not mean that you remove the interface of lock and key.
> An appropriate solution in a certain set of cases, but nothing like a general solution for all.
And the article isn't advocating for the removal of all UIs. It even goes on to advocate certain UIs. Don't just read the headline. Take a few minutes and consider what's being said.
> that's really a "bug"
This thinking worries me a lot, especially when applied to what Google is heading for.
So now many people are removing all buttons and settings, because they think that every button or setting is a failure to guess the user's need.
The epitome of this move is probably Google Now, and I fear it is going to fail or to stink. My instance of Google Now tries to guess when I will drive back home and notify me with a traffic estimate: so far, it has been successful and mildly useful once in 20 tries. So in fact, I have been spam-notified 19 times by Google itself! And me driving home has relatively high predictability, compared to buying books, going out, etc.
I just read Thinking, Fast and Slow for the second time, and its gist, in this context, is that the world is much less predictable than we think it is. If the Bayesian statistical approach, or even simple formulas, work better than intuition or experts in so many domains (financial and political forecasting, clinical diagnosis, etc.), it doesn't just mean they should be used; it means these domains have very low prediction validity. For instance, the only valid and proven prediction for next year's stock price is "80% chance to be between -10% and +30% of the current price", which is useless common sense. Just as the only correct explanation for above-average earnings this year at any company is "below-average results last year produced lower expectations, and we are regressing to the mean".
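Regression to the mean is easy to demonstrate with a toy simulation (all parameters here are invented for illustration: results are modeled as a stable "skill" term plus independent year-to-year luck):

```python
import random

random.seed(0)

# Each company: a fixed skill plus fresh luck each year.
n = 10_000
skill = [random.gauss(0, 1) for _ in range(n)]
year1 = [s + random.gauss(0, 1) for s in skill]
year2 = [s + random.gauss(0, 1) for s in skill]

# Take the bottom 10% of year-1 performers...
worst = sorted(range(n), key=lambda i: year1[i])[: n // 10]

mean_y1 = sum(year1[i] for i in worst) / len(worst)
mean_y2 = sum(year2[i] for i in worst) / len(worst)

# ...and the next year they regress toward the mean: still below average
# (their skill really is lower), but much less extreme than year 1,
# because half of their bad year-1 showing was just bad luck.
assert mean_y2 > mean_y1
assert mean_y2 < 0
```

No mechanism "produces" the improvement; selecting on an extreme outcome guarantees the luck component won't repeat.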
Moreover, if a few things in my life do turn out to be predictable, I probably would not want to be reminded of this saddening fact. Suppose I am hiking next weekend in the mountains nearby. If a Google Now thingy tells me, two hours before leaving, that I should not forget my socks, it will give me the same feeling as a caring but annoying mother: "Yes Mom, I'll call you from there; no Mom, I'll not forget to wash my hands..." Who liked this as a teenager?
So, either personal life events are not predictable, or they should not be predicted because most people would rather keep the illusion they are in control.
What technology needs to do is empower users with helpful (or funny) tools. When I am trekking somewhere in nowhere land, I feel empowered being able to pull up a satellite map and GPS positioning on my phone, and I use it whether I'm a bit lost or just want to check the route again. But I would really, really hate it if the next Google Maps detected from my speed that I was hesitating, and popped up a "Just turn left after the next oak tree to get back to your car, you sucker".
Google's new trend is parenting its users, and I am not sure it is the correct move.
I don't know anyone who thinks the world is predictable. Anyone. Your problem is scope. The world is not predictable. But small pieces are. And the world is made up of many, many small pieces. The challenge is discovering those small pieces. Not all are obvious.
And those pieces which we have discovered we can predict at a reasonable level, we ignore.
Take automatic doors at the entrance of stores. We approach the entrance, the door recognizes this, and the doors open. This is prediction. We are predicting that someone approaching in a particular area will most likely want to enter the store, so we prepare by opening the door. It isn't 100%, but it's an overwhelmingly effective prediction.
> Who did like this when a teenager?
I'm not a teenager anymore, though, so your reasoning doesn't follow. You're upset that if you are lost, and you want to find your way back, Google will, maybe, tell you?
Seems like your problem is personal, and frankly, a bit selfish. It's easily solved by not using the tools that you do not want to use.
That land might not exist, but for every problem we try to solve, we should search for that land relentlessly, as if it did actually exist. It's not a framework, not a means to an end; it's a goal, an end in itself.
Disable it and choose to use the smart phone app. They are not mutually exclusive, and you've already used services just like this. Automatic doors that open as you approach them at a store? And when they aren't powered, you can still open them manually. Bad implementations won't open if they aren't powered.
> 2. it is best exemplified
If that's your best example, I feel bad. It's merely a flaw in the system, easily resolved. It's the powered door that won't open without power.
Designing these intelligent interactions is not easy. And dealing with exceptions is challenging. They are, after all, exceptions. But don't assume that designing these interactions should account for every exception. Rather, it should make normal easy and make exceptional resolvable.
Then there was Lotus, which (at the time it was written) was driven by a text-based command line. Type '/', and the cursor would drop to the penultimate line; a list of valid commands/options would appear on the bottom line. You could either type the command/option, or use the arrow/tab keys to select it. And as you typed, the bottom line would change to always list valid options (or a description of what to type next, like "(type a number)").
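The core of that always-visible-options interaction is just prefix filtering after each keystroke. A toy sketch (the command names here are invented for illustration, not Lotus's actual menu):

```python
# Toy sketch of a Lotus-style command line: after each keystroke, show
# only the commands still consistent with what has been typed so far.
COMMANDS = ["Copy", "File", "Format", "Graph", "Move", "Print", "Quit"]


def valid_options(typed: str) -> list:
    """Return the commands matching the current (case-insensitive) prefix."""
    t = typed.lower()
    return [c for c in COMMANDS if c.lower().startswith(t)]


assert valid_options("") == COMMANDS           # '/' pressed: everything is valid
assert valid_options("F") == ["File", "Format"]
assert valid_options("Fo") == ["Format"]       # prefix narrowed to one command
assert valid_options("x") == []                # nothing matches; nothing offered
```

The user never has to memorize the command set, yet an expert can type straight through without waiting for the display.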
Heck, even VMS had an extensive help system available at the command line (type 'help'---it was interactive). Please don't judge all CLIs by Unix and MS-DOS.
Don't judge all CLIs by Unix or MS-DOS.
Being presented with a blue screen in WordPerfect is a minimal interface, but to do anything you have to learn how to use it. You have to learn how to get the menus up, and so on.
And while good design is important, and tends to reduce interface clutter, it's also important to let users have easy access to the stuff they need.
Reducing the interface to what users need is a tricky skill, and risks angering power users. (Google dropping + to force inclusion is a notable example.)
If on average the technique works across the entire customer base, then they will apply it.
While it would certainly be better if they were more accurate in that fringe case as well, if it would take 4x the effort to eke out a 20% improvement, that might not be a good trade-off for them.
At a restaurant named The Herb Farm in the Seattle area, you don't pay when you eat; you pay before the dinner, when making a reservation by phone (the menu is prix fixe). So when you're done eating you just walk out: no waiting for the check, no computing tips, none of that nonsense that should not be part of a pleasant dinner. It's incredibly liberating, in a way I would never have understood if I had only been told about it.
For another example, when you arrive at the restaurant Canlis, they help you out of the car, and then you walk right into the place. When you're done eating, you walk right out and your car is already back there. They spot you on the way to the door and reshuffle the cars so that yours is at the front. So you just walk out, get in the car, and drive off. As it turns out, searching for a place to park, parking, pocketing the keys, and so on is a huge mental overhead. I only realized that once I was liberated from it. The typical "valet" service actually does nothing for me: there is still the overhead of asking for your car, waiting for it to be brought, tipping the valet. Meh.
Both places charge a pretty penny for their services, so they can afford the "luxury" of good service, including good wages for the employees who make it possible. Sadly, neither does both of these things, which is still an open niche in Seattle dining. But more to the point at hand: where the restaurants achieve good service at the cost of quality labor, computer systems could do the same at the cost of good engineering, that is, by removing unnecessary delays and accidental complexity.
And there is plenty of interaction with people about food, just none about things that aren't food.
I'm speaking at SXSW 2013 about "the best interface is not interface." This is an evolving idea, and I hope to have some gaps filled by the time I speak in March. So thank you all for your feedback.
I'm collecting more examples at nointerface.tumblr.com. It's meant to be an inspiration site for people interested in this movement. It's an idea Tumblr. And starting mid-January, I'm hoping to have two new posts a week, but that's mostly dependent on finding great examples. So, please tweet any to @goldenkrishna... I'll give credit, of course.
I dunno. If the service and food were good, I like commending the staff on it. It would feel awkward to approach the staff without any reason, because I already paid earlier. Granted, I think this is where the metaphor breaks down. Interacting with the staff isn't just an interface; it's interacting with humans.
Also, a lot of famous security vulnerabilities, like the fact that Windows will execute things on a memory stick without being asked, are the result of people trying for no interface. Having merchants bill you without your consent seems really, really sketchy, and maybe the video addresses this in a way that would satisfy me, but I'm sceptical.
This is an excellent observation.
Not so crazy now, is it? If they could make it stretch to the point where I know the business but not the person working there, I'd be fine with it as well. And the system could learn typical usages, and close the tab for me automatically, or bring it up for review if needed.
This sounds like an awful idea. I don't trust another developer to accurately be able to predict how I use my money.
What if someone needs to pay a bill with that money this week, and can pay off their tab next week? But the AI decides to pay off the tab, you have no way of getting the money back, and now your house has no power.
Or maybe this is only a system designed for upper middle class people who have floating money to do that with. Seems a bit short sighted to me.
Computers should be doing more for us. They're smart, they're good at logic, they can make decisions. It's our job as programmers to make them do more and allow us to do less.
I'm tired of complexity. I don't want a car that has a touch screen—I want one with a knob that's tactile and has a blue-to-red gradient that makes sense and only controls the thing it looks like it controls. I don't want to think about it. I don't want it to take three steps and two cluttered screens.
And if my fridge is going to be smart, I want it to be smart about being a fridge. I want it to do one thing smartly: make things cold. If it's going to do something else smartly, I want it to be relevant to keeping my food, so I don't know, figure out when I need new milk by allowing me to scan the barcode when I buy it. Then how about making it available on my phone so I can answer the age-old question "Do I need to buy milk?" when I'm at the grocery store. That might actually be useful.
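The milk wish above is a genuinely small feature. A toy sketch of the barcode-in, query-from-phone idea (everything here, names and shelf lives included, is invented for illustration):

```python
# Toy sketch of the "do I need milk?" fridge: scan items in when you buy
# them, ask from your phone at the store.
from datetime import date, timedelta

inventory = {}  # barcode -> assumed expiry date


def scan_in(barcode: str, shelf_life_days: int, today: date) -> None:
    """Record a scanned item with a rough expiry estimate."""
    inventory[barcode] = today + timedelta(days=shelf_life_days)


def need_to_buy(barcode: str, today: date) -> bool:
    """True if the item is absent or past its estimated expiry."""
    expiry = inventory.get(barcode)
    return expiry is None or expiry <= today


today = date(2013, 1, 7)
assert need_to_buy("milk-1l", today)                        # nothing scanned yet
scan_in("milk-1l", shelf_life_days=7, today=today)
assert not need_to_buy("milk-1l", today)                    # fresh milk at home
assert need_to_buy("milk-1l", today + timedelta(days=8))    # past the estimate
```

Note it answers one question about one job (keeping food); no Twitter required.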
But for pete's sake, if it has twitter, I will not buy it.
Computers should simplify our lives. If it adds complexity, screw you, start over and try again.
Hell yes. Whoever decided to put touchscreens into cars deserves the Darwin Award.
We live in a bit of an unfortunate age where perfectly time-tested interfaces are replaced with inferior touch-controls left and right, just because "we can".
This applies even to the canonical application: the phone. It's great how many things we can do on our phones now, but the core feature (telephony) has suffered badly. I can't take a call without looking at the phone anymore, much less place one. In the winter, every incoming call turns into a little challenge (how quickly can I get rid of that glove without dropping my precious $600 slab of glass onto the icy concrete?).
Rewind to the 1990s. Many people could tap out an entire SMS on their Nokia without taking it out of their pocket. We could dial numbers without looking because we knew a contact was "four taps down" in the address book, and the buttons gave a reassuring click when pressed.
The industry needs to re-discover tactile feedback and predictable latency as desirable traits. Early Android phones had a jog dial (Sony) and dedicated camera buttons (HTC), but these largely disappeared for stupid reasons.
I really can't wait for Apple to re-"invent" physical controls in one of their future models. Perhaps the telephony-experience on our expensive pocket-computers will then finally catch up to what we had 20 years ago...
You can pretty easily turn any set of gloves into touch-capable gloves using a small amount of conductive thread and sewing it through the pad area on the primary finger you use for touch.
But they are not smart, that's the problem. We have to program everything.
You do not need to unlock your phone or navigate to the Wallet app, and you don't need to select the credit card to use at payment time. Also worth noting is that tap-and-pay works even without a data connection.
The real lessons to learn from this are: people are paranoid about paying for things ("how will my phone know to make a payment if I'm not in the app?"), and people don't read documentation (the first few times you use Wallet, it's explained exactly how you make a payment).
One last thing to think about: creepiness. As a society, we have the technology to predict exactly what you are going to buy and when, and we can use cameras to recognize your face. So if you usually buy a latte every morning, the coffee shop could just make it in advance, and you could walk into the store and pick it up. The security tape would see your face picking up your coffee, and automatically deduct the money from your account. But I'm guessing that the HN crowd, despite their desire for convenience and technology, would hate that for privacy concerns. Do you really want your coffee shop tracking your every move? Who will they share that information with?
(Why is the complete lack of an interface creepy? Because nothing else we do is completely lacking in interface; usually you do something to get a result -- doing nothing to get the same result is weird.)
(There's a reason why we carry around plastic cards for paying with things. They're cheap and simple.)
I'd highly recommend DOET for anyone interested in this sort of thing.
So no, thank you, no self-learning climate control systems, microwaves or lawn mowers.
Maybe you'd rather stick with what you're comfortable with right now, but don't count out great ideas that you might not have thought of yet.
Simple, predictable interfaces that the user can understand and control are okay. Self-learning interfaces that do what the user wants are okay. But there is a gap between the two with poor (and even good but imperfect) learning interfaces.
A microwave that sets the timer itself would be great - unless it accidentally sets 5% of the meals on fire. Even a small failure rate is very frustrating, as the user cannot accurately predict and control its behaviour.
(It might be fine for a microwave - you can see that 20 minutes for a pack of popcorn is wrong, and cancel - but the same issues appear in many other interfaces.)
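A back-of-envelope calculation shows why the arithmetic of "small failure rate" can still come out badly when failures are expensive (all numbers below are invented for illustration):

```python
# Why even a small failure rate can sink an "auto" mode: successes save
# a few seconds each, but each failure costs minutes of cleanup/retry.
seconds_saved_per_success = 10       # not having to punch in a time
seconds_lost_per_failure = 15 * 60   # ruined food, cleanup, retry
failure_rate = 0.05

expected_gain = (1 - failure_rate) * seconds_saved_per_success \
    - failure_rate * seconds_lost_per_failure

# On average the smart mode *costs* time, before even counting the
# stress of unpredictable failures.
assert expected_gain < 0
print(f"expected gain per use: {expected_gain:.1f} seconds")  # prints -35.5 seconds
```

And an expected-value calculation understates the problem: people weigh rare, uncontrollable losses far more heavily than steady small gains.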
You listed some practical engineering issues.
I can list numerous psychological issues, such as the anxiety of knowing the failure rate will probably be much worse than my own, and the anxiety of not knowing how much extra time I'll have to spend on rework. Uncontrollable failure is stressful. Then there are attitude-level issues with being active ("I'm making popcorn") versus passive ("The microwave is making popcorn"). And that's before we get started on complacency: "The microwave is really smart, so I'll let my kids use it. Whoops, the house burned down after a 20-minute popcorn setting... Guess I should feel guilty, or someone should, anyway."
If a Roomba vacuum can vacuum the house without training, why would I have to bother training a lawn mower? Shouldn't it work without training?
Or a microwave that could recognize types of food with an internal camera, and suggest optimal cooking times based on total times you've done in the past?
"I see you're trying to make popcorn! I notice you always somehow always have the same brain fart when it comes to microwave popcorn and burn the ever-loving bejesus out of it, shall I set that up for you again?"
Every "self-adapting" interface that I can recall using was a disaster. Then again, the ones I can think of were Windows Start Menu and Microsoft Office features. They were really terrible, though.
However, where I currently live, there are no such constraints for a Roombamower. There is no fence between our yard and the neighbor to the south. There is a fence between us and the north neighbor, but only in the backyard. The driveway isn't a good border because we still have a portion of our yard on the "far side" of the driveway (so the Roombamower can safely cross the driveway to keep mowing), but it shouldn't venture out into the street (no curb---and no sidewalk (another rant) to worry about). I'm not terribly concerned about our back property line as that's a nearly impenetrable thicket of native plant life (Florida, if you are curious).
In our situation, we would need to somehow inform the Roombamower the extent of our yard.
As far as the microwave goes, I can barely operate ours (and here I am, programming computers for a living) and would prefer two dials---power (linear scale is fine) and time (logarithmic would probably work nicely for this).
Roomba mower that learns - that's where it gets a bit scary. I wouldn't be comfortable with it because I wouldn't know how and what it learns from me, also how it reacts to changes in the area to be mowed. What if it learns from me to ignore anything (or anyone) that wasn't there during the training phase by running over it?
I'm a big fan of self-learning systems, in many cases. Tivo? I love it. A self-learning climate control system? Sure, as long as I can override it if I'm leaving for a week in January, for example.
Now picture a robotic lawn mower without that broader knowledge of the world. If it has limited sensors, it may mow over things you don't want it to; even if it does it less than you do yourself, it's a problem. How about if it cannot self-diagnose situations where it should not be working or has some of its safety features disabled for some reason?
For some equipment, being able to operate independently is a much harder problem than simply learning where the mowable area of your lawn is.
He says that tapping a device against another one is undesirable, but I think people like that kind of "I have to do this for money to disappear out of my account" reassurance.
Me: "This isn't what I ordered."
Cashier: "It's what you ordered last time. We went ahead and made it for you. And we also charged your account. Aren't you delighted?"
Me: "But I just wanted a coffee. Now my account is overdrawn, and you just cost me a $35 overdraft fee."
The Mercedes proximity based, keyless entry system is actually a complex digital abstraction over a mechanical key/lock.
Mercedes has to address a bunch of security concerns, such as preventing an adversary from sniffing my key information through my jacket pocket, digitally cracking codes, etc. Since the system tries to protect you from locking your key in the car, more technological components need to be thrown into the mix to detect whether the key is inside the car. If the driver has to reach into his pocket to turn on the ignition, it would defeat the purpose of going "keyless", so presumably the ignition system also gets a few layers of complexity. The cost of the whole system would also go up. Repair and maintenance don't sound so appealing either. The whole thing can fail in a lot more ways.
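To see why the sniffing concern forces complexity: the fob can't just broadcast a static code, since anyone who records it once can replay it forever. A minimal sketch of the standard fix, challenge-response with a shared secret (this is the general technique, not Mercedes's actual protocol, and real systems also have to handle relay attacks):

```python
# Challenge-response sketch: the car sends a fresh random nonce; the fob
# answers with an HMAC over it using a factory-provisioned shared secret.
# A sniffer who records one exchange learns nothing useful for the next.
import hashlib
import hmac
import os

SHARED_SECRET = os.urandom(32)  # provisioned into both car and fob


def fob_response(secret: bytes, challenge: bytes) -> bytes:
    return hmac.new(secret, challenge, hashlib.sha256).digest()


def car_unlocks(secret: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time compare


# Normal approach: fresh challenge, correct fob, door unlocks.
challenge = os.urandom(16)
assert car_unlocks(SHARED_SECRET, challenge, fob_response(SHARED_SECRET, challenge))

# A replayed response fails against the next (different) challenge.
old_response = fob_response(SHARED_SECRET, challenge)
new_challenge = os.urandom(16)
assert not car_unlocks(SHARED_SECRET, new_challenge, old_response)
```

Every extra property (detecting the key inside the car, hands-in-pocket ignition) layers more radio and logic on top of this core exchange, which is exactly the cost/failure-mode concern above.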
I am not implying it's a bad idea. It very well might become commoditized technology some time in the future and pave the way for other interesting possibilities.
I prefer utilitarian design which is more concerned with simplicity through and through rather than just minimizing the footprint for the user interface.
Interface: A point where two systems, subjects, organizations, etc., meet and interact.
It's analogous to security flaws. If there is a flaw in the design, no amount of bug fixing will make the system secure, unless that 'bug fixing' changes the design.
The way I look at it is: yes, the software should keep a history of user behaviour and base its actions on that, but there must be feedback involved, either explicit or implicit. That way, if I gave some input to the system once but then never did so again, the likelihood that that one event should affect the future would diminish over time.
There could be trickiness around "Bubbles" (like a Search bubble, where it only recommends to you things it thinks you'd like, and never shows you other things). I think those are problematic and should be dealt with. But I don't think that means it's impossible to fix. It's just something that needs to be thought through. I don't have an answer for it right now but that doesn't mean there isn't an answer.
Your statement is what I mean.
"Thinking things through" should be done during design.
Once you have built a system, it's much harder to compensate for design flaws.
Programming is not designing.
Designing is not programming.
Fixing bugs is not designing.
You have to design into the UI system a means for it to compensate for changes in user behavior. You don't want a system that takes many uses to train. At the same time you don't want a system that is trained by a single use.
For me this is the crux of the problem.
The happy medium that automatically detects deviations from a user's 'normal' behavior _and_ takes the correct action is very hard to design, as it involves AI and fuzzy logic.
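One common compromise between "trains too slowly" and "trained by a single use" is an exponential moving average, where a smoothing factor sets the trade-off explicitly (the scenario and the alpha value below are illustrative assumptions, not a prescription):

```python
# alpha near 0 = stable but slow to adapt; alpha near 1 = adapts in one
# use but is jittery and easily fooled by outliers.
ALPHA = 0.3


def update(estimate: float, observation: float, alpha: float = ALPHA) -> float:
    """Blend a new observation into the running estimate."""
    return (1 - alpha) * estimate + alpha * observation


# Hypothetical: user used to leave work at 17:00, then shifts to 18:00.
estimate = 17.0
for _ in range(5):
    estimate = update(estimate, 18.0)

# After a few observations the estimate has moved most of the way to the
# new behaviour...
assert 17.7 < estimate < 18.0
# ...but a single observation only moves it 30% of the way, so one
# outlier can't retrain the system.
assert abs(update(17.0, 18.0) - 17.3) < 1e-9
```

It's not the fuzzy-logic happy medium, but it makes the "how many uses to retrain" question a tunable parameter instead of an accident of the implementation.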
I agree with this fully. This is something that would need to be solved before building such a system.
A great example of a system which learns from history, but which also supports changes in behaviour is: http://worrydream.com/MagicInk/#engineering_inference_from_h...
The linked example is specific to one application, but he continues to detail what he thinks would make for a general solution.
If you like the ideas in the OP, you owe it to yourself to chew through Bret's paper. A lot of the same ideas, expanded and thought through.
No steps = easy to debug when it goes wrong. You just point at it and loudly whine: "It's not working!" then nobody fixes it because all the back end is "magic".
Details are available in logfiles; anyone who has a use for those details most likely knows where to find them. The end user can't do anything with memory addresses, etc., so it makes sense that they don't see them.
I think the iPhone is a good example with its one button design and size, as opposed to clunkier cellphones with 3 or more buttons.
2. The one-front-button deal is one of the many things I hate about iPhones. The "go back" operation is something we do often on smart phones. I want my phone to have that at the front. I also like trackballs there.
True, you do need to worry about the engineering and stats and this might be close to solved for some trivial problems.
Yes, you also need to worry about the interactions between AI and "normal" people, and this is nowhere near solved even for trivial problems, but it's been slowly improving for decades.
The biggest problem is debugging interactions between AI and "AB-normal" people. How should the AI react when rubbed up against an OCD person, or a psychopath, or a developmentally disabled end user?
This I believe to be the fundamental failure mode for AI in end-user products, probably enforced by the greedy legal system. If you ignore the most vulnerable members of the population, you've knowingly released a product that kills them; that's not going to turn out well. Or you can hyper-optimize it so that your lawnmower is better at dealing with psychopaths than the smartest human, in which case it's hyper-regulated by the medical system up to unaffordable cost.
The solution is a lot simpler: don't make me use a computer for everything. I can open my car with my key, and pay using cash (or even a bank or credit card).
Twitter in your car and apps on the fridge only exist because of the app hype. I don't think they will last.
Not that I disagree, it is just fun to think about possible unintended consequences of dependency on AI for every day tasks. I am sure there are some short stories that deal with people in an advanced civilization losing their automation, but I can't seem to track any down on Google at the moment.
Yeah, right. At some point on that curve the UI would grow arms and make me my favorite breakfast every morning. Objects in the world are innately limited by the causes present at their origin. A pear tree can only ever produce pears, unless what is encoded in its seed is changed.
Other than that, wow. As a UX designer, I would expect the author to show more critical thinking when evaluating interfaces like the car dashboard and refrigerator. Put those into context, as you conveniently do with interaction patterns like opening car doors and paying with e-wallets.
I don't even need to read the blog post. The title says it all. djb wrote about this in the docs for qmail many years ago.
My idea of a great "user experience":
I switched it on / started it up, and it did what it's supposed to do, in a predictable span of time, without asking me questions or requiring me to fiddle with anything.