If you must embrace high-tech in the car cockpit, voice control is fine (if it works well), but touchscreens are horrible in this environment.
Take the infamous iDrive example. For all the good design that BMW puts into their cars, it all went out the window when it came to software. It's like when the software engineers showed up, everyone else threw up their hands and said "take it away--we don't care how unusable it is, for it is 'cool' and that's what people want."
For example, if you wish to change a radio station to another preset, and you have the misfortune of being elsewhere in the touchscreen UI, you must first navigate to the Radio screen, then switch to Presets (working off slightly hazy memory here, so pardon inexactness if there is one). How is that better than just whacking the button for the preset # on the radio?
A must-read book for any techie, in my opinion, is "The Inmates Are Running the Asylum" by Alan Cooper. It gave me a new perspective on computing. If you do any kind of design for software that is used by a human, you must read it.
So that book starts out with "Riddles for the Information Age" and asks you what happens when you cross a computer with: an airplane, a camera, an alarm clock, a car, etc. As you might guess, the answer is that things did not go well.
Ironically and incredibly, with current Chrome on Mac, I am unable to scroll to the bottom of the page on their site describing the book: http://www.cooper.com/#about:books
Kindly, the old working version of their site is still available: http://www.cooper.com/about/books.html
1) There are 8 programmable buttons for radio stations/destinations and maybe some other things. They are numbered.
2) Steering wheel buttons for audio input selection and track/station up down and volume.
3) iDrive gestures of pushing and holding East for navigation and South for entertainment (I don't have a phone kit as I don't want to call while driving, that would be the North direction).
So I only need to spin the iDrive for setting destinations or scrolling through the contents of the connected iPod. I find the iDrive with the mentioned shortcuts a much better concept for a car than a touchscreen (at least until you can feel the screen content without looking).
I find a recent Mini much worse for missing the programmable buttons and, possibly, the push-and-hold compass-bearing shortcuts.
I still mourn the loss of the jog-wheel on BlackBerry devices; damned useful for scrolling down long lists in a controlled way without obscuring the screen or having to play with a trackball designed for mice!
Another area where dials win over touch interfaces would perhaps be DJ mixers. Touch interfaces will only get better, but they will never replace the tactile, visually static presence of an actual knob/button for most people. I want a phone with a button you press to answer calls, not a virtual one.
In automobiles, there's an increasing tendency toward soft/electronic controls for cabin heating/cooling. Which is ... annoying.
My preference is for the three-dial design (plus A/C switch) first introduced by Japanese automakers: one dial for fan speed (and off), one for vent settings, one for heat mix. The switch enables/disables the AC.
Compare with the standard American design at the time which had a fan speed switch, heat control, and a multi-function slider combining both vent settings and AC. The end result being more complexity and fewer available settings (want to direct cold only at your feet, or blow the windshield without AC -- no can do).
Agreed regarding touchscreen. I doubt even voice will work as reliably as physical controls though.
This. I much preferred the click wheel and button of my old iPod Nano to the touchscreen of my new one. I could pause, skip, rewind, and adjust the volume on the old one without looking at it.
It's amazing how far this can be taken.
In any well-designed interaction it needs to be clear what you (the user) can do, and what the state of the system you're interacting with is. 'No UI' only works when you're augmenting a system where these two things are already clear. For instance, in the case of the car door system, you already know that you can open the car with your key, and you can tell when the car door unlocks.
When I open a new app for the first time, I don't already know everything it can do. I need to see the interface to know what's possible. And I need to see feedback to know that I'm making progress.
I've heard that many Nest owners are actually a bit disappointed in its smart features, as there's no way to tell why it's doing the things it's doing (why did it just make it cold in here?) Without a way of communicating its reasoning, people are suspicious of its "father knows best" recommendations. Even Amazon tells you roughly why it's recommending something to you.
And Voice UIs don't count as no UI. In fact, they're often a very poor interface, as they convey information much more slowly and invasively than a visual interface, and there's no a priori way to know what voice commands a system accepts.
1. Get (real) wallet.
2. Find the right card.
3. Swipe the card.
4. Press 'Debit' on the machine.
5. Press 'No' to the cash back prompt.
6. Enter my PIN.
1. Get out phone.
2. Wake the screen.
3. Touch the phone to the machine.
4. Enter my PIN on the Google Wallet prompt that appears.
5. Confirm the payment.
If they were to list the steps for paying with a normal credit card would their list include "find the teller's hand so you can give them the card"?
Edit: How do I <ol>?
1. Get out phone
2. Tap phone on sensor
The NFC system in Japan (standard on nearly all phones except iPhone) works by putting virtual cash into some chip that doesn't need the phone. In fact you can do it with your train pass which was the first way they did it and then later added the same chip to the phone's case.
So, no need to turn on the phone or choose an app. To add cash to the chip there is an app so basically you add $50-$200 and then don't worry about it for a week or month.
Since they started as train passes you can also ride all the trains, subways and buses (take out phone, tap on sensor, done).
You can even reserve seats for long distance trains on your phone, walk on the train, there's a sensor above the seat you tap to "check in". Tap it again to check out if you want to switch seats.
The chip holds all the transactions on it. My 2006 Sony Vaio has a built-in reader for the chip which can import those transactions for things like expense reports. I would guess that more current phones have apps for reading the chip.
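The stored-value model described above can be sketched roughly like this (a hypothetical Python toy model; real chips like these use secure hardware and a proprietary protocol, and the class and merchant names here are invented):

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class StoredValueChip:
    """Toy model of a stored-value chip: the balance and the
    transaction log live on the chip itself, so no network or
    phone OS is needed at payment time."""
    balance: int = 0                        # value in yen
    log: list = field(default_factory=list)

    def top_up(self, amount: int) -> None:
        # The only step that needs an app / network connection.
        self.balance += amount
        self.log.append(("top_up", amount, datetime.now()))

    def tap(self, merchant: str, amount: int) -> bool:
        # Runs entirely between the reader and the chip, offline.
        if amount > self.balance:
            return False
        self.balance -= amount
        self.log.append((merchant, -amount, datetime.now()))
        return True


chip = StoredValueChip()
chip.top_up(5000)          # add cash once, then forget about it
chip.tap("JR East", 160)   # ride a train
chip.tap("kiosk", 120)     # buy a drink
print(chip.balance)        # 4720
```

The point of the design is visible in `tap`: no lookup, no account, no round trip; the log on the chip is also what the Vaio-style reader would import for expense reports.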
Hacker News, sadly, supports very little formatting. http://news.ycombinator.com/formatdoc describes all the formatting HN supports for comment, submission, and user profile text. <ol> is not supported. The common workaround seems to be using separate paragraphs for each list item, though some people put the whole list in a code block instead.
Key + keyhole = interface.
A keyhole that "knows" when the correct key is close = better interface.
Wallet + Money + cashier + cash register = interface.
If you could do NFC without an app & without a cashier, that would be even better.
There is no such thing as "no interface" if something is being accomplished.
Rock + coconut = interface.
Hand + mouse = interface.
Your eyes + my words = interface.
"Oh, you want us to add a touch responsive display to a refrigerator rather than using mechanical buttons?"
Then they throw the kitchen sink at it. Once you've installed that touch screen and the hardware needed to control it, you might as well add the ability to control ice cube production from a mobile app. Since you're already there, you might as well give them the ability to Tweet that they just pulled a slice of double-chocolate cake out of the fridge.
Just because these components are capable of acting as small computing devices doesn't mean they should be utilized like one.
The entire time I'm reading the article I'm cringing too. I don't want a device that just operates as a universal key for everything I do during my day. I walk up, order a sandwich and they charge my account without any physical transaction happening and no passcode required to open my phone? What's to stop someone else from doing the same thing?
A car that opens it doors and starts its engines because I have a phone in my pocket? Same issue.
Steal someone's phone and you steal the keys to their life then (really easily).
No different than what happens now. When your wallet gets stolen you call the credit card companies to cancel your cards.
Yes, sometimes when you order a sandwich you don't want to pay for it immediately, and sometimes when you get close to your car you don't want it to unlock. But those times are the exceptions. The exceptions are the times that you should deal with a more complex interface. The other 99% of the time it should do the right thing automatically.
Later they added a button to lock/unlock the car because people were not comfortable with the technology:
1. They would get inside their house, drop the key, then go back out to check that the car was locked.
2. You could not just park in front of a shop, because the car could get unlocked while you were shopping if you got too close. Any activity around the car could end up in a continuous stream of locking/unlocking.
Unfortunately, making sure that nobody can enter your car is one of the primary concerns of car users. And if it is not, sooner or later, your car insurance will convince you otherwise.
Someone grabs your phone and gets free meals? What controls are there on that interface?
I don't even have a dongle on my keys to unlock my car remotely. Doesn't bother me in the slightest.
> Cashiers should be asking for ID and checking your signature. It's actually a control that is supposed to occur in stores.
No, it's not. For small purchases, merchants are not required by the credit card companies to check ID or to even ask for a signature.
And for larger purchases, it typically still doesn't happen. My credit card has "ask for photo ID" written on the back instead of a signature. Even with this, I get asked for ID maybe once every two months. Checks that are supposed to happen don't matter. Only checks that actually happen matter.
> Someone grabs your phone and gets free meals? What controls are there on that interface?
Well, if you're using the system Dorsey was describing, your photo pops up every time you go to pay. So the cashier sees it without asking for it. So if I try to pay with your phone, the cashier can say, "I'm sorry, but you don't look much like rbellio. Should I call my manager over?"
Having worked in loss prevention in the past, I know that the intent is to make sure these controls are checked and maintained. The issue you run into with cashiers is that the turnover rate is usually so high, or those checks aren't done frequently enough, that the controls become lax.
Why would merchants be so concerned with these controls, you might ask? Because if it can be proven that the charges were made fraudulently, the merchant becomes responsible for them. If someone buys $300 of stuff from a store using your credit card, the store loses that cash.
Speaking of pictures associated with your phone. Where is this picture going to be stored? On the phone? Where, if someone steals it, they could replace it? Should we have a national database then that relates your phone number to an image of you?
Yeah, absolutely. Don't put a GUI as part of my car controls; that's stupid. But the wheel, pedals and shifter are certainly an interface.
If you consciously feel that you have to use effort to control your eyes to see, then yes, to you your eyes are an interface. Many others probably don't think of them that way.
1. While standing in line, I found the app and had it open, but the phone kept auto-locking until my turn came (I re-opened it at least 3 times)
2. The app would not scan because the screen brightness was low. I frantically went to settings to change the brightness and retried, and all this time the people standing behind me were getting pissed.
I think using phones as "keys" or "payment cards" is not the best interface. Ideally there should be a separate device (like a credit card) to do payments and a "Key" device to open all my locks.
It is similar how https://lockitron.com/ is advertised to work except you are required to touch the handle.
No UI means that if the phone is in your pocket the door behaves as though it is always unlocked.
Fortunately the Nest's UI is good enough that it's still a good thermostat without the learning mode. There was a comment on HN the other day along the lines of "if you have good enough AI, does UI design quality matter so much?" and I guess I think that it does: if there's any way for the AI to mispredict, then you need something good for correcting it.
Exactly. This is why I'm in favor of a worldwide shift to hieroglyphics and touchscreens for business. Writing business correspondence is old hat. Letters, words, sentences, paragraphs, what a nightmare. And shorthand? Don't get me started. Let's face it, we all would much rather touch some graphical shapes on a screen to communicate. A picture says a thousand words, so why are we typing them out? What a waste of effort. Text has got to go. It's time to leave the alphabet soup behind.
Finally a design firm who really "gets it".
Cooper, the Thought Leaders, to the rescue!
Now they have iDrive.. Complete mess!
Too often usability is sacrificed for design. White lights are not the worst in this sense. For example:
Audi has red lights, which, as some people believe, makes drivers aggressive.
VW has blue lights. This is absolutely terrible, as the human eye cannot focus on blue very well. Extra-fuzzy numbers!
The worst thing I ever saw in that regard was a rental: Light gray dashboard. I had to cover it with dark t-shirts, otherwise I wasn't able to see the road!
I have iDrive in my car.
Think about the cheap TV sets with no buttons. A ton of menus just to change the contrast.
Or microwave ovens unusable without manuals.
EDIT - Why do I say cheap TV sets? All TV sets.
What you can strive for is interfaces that are so intuitive and easy to use that you don't need to fight them or rtfm.
Great UI designers know how to do this.
This still isn't truly a no-interface situation, though. It's an interface that's so natural that you don't have to think about it. You express your intent by walking toward the door. You're still expressing intent, though. The sensor just does a really good job interpreting that intent and acting on it. But like all interfaces, this one is still imperfect. e.g. Sometimes the door will open up when you're just walking too close. Or sometimes it doesn't open when you expect, presumably because it's poorly calibrated (or maybe you have no soul).
On the other hand, when they are facing someone through the door and want to close it, they usually have to press a little button next to it.
Maybe it could be possible IRL with a face detector and looking at the direction and speed of the person.
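That direction-and-speed idea is easy to sketch as a crude heuristic (hypothetical Python; the function name and all the thresholds are made up for illustration):

```python
def should_open(distance_m: float, speed_mps: float,
                heading_toward_door: bool) -> bool:
    """Open only for someone who is close, actually moving, and
    headed at the door -- not just standing near it or walking past."""
    return (distance_m < 2.0          # within sensor range
            and speed_mps > 0.3       # walking, not loitering
            and heading_toward_door)  # e.g. from face/gait detection


should_open(1.5, 1.2, True)    # approaching: open
should_open(1.5, 1.2, False)   # walking past: stay closed
should_open(1.5, 0.0, True)    # standing face-to-face: stay closed
```

The last case is exactly the "facing someone through the door" situation: a person standing still and facing out would no longer hold the door open.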
When I enter a shop and a ringer alerts the shopkeeper of my presence, I never had to interact with the machine at all - it just sensed me and acted in the background.
That's why the Internet of Things will become big. It's not the use case of turning the oven on 20 minutes before coming home yourself. It's about the oven knowing you're eating a prepared lasagna that needs to be in the oven for 20 minutes. While you're driving home, all the traffic information and your location are used to determine when the oven needs to start its work.
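The scheduling itself is trivial once an ETA is available; the hard part is the reliable ETA. A minimal sketch (hypothetical Python; the function name and preheat default are invented):

```python
from datetime import datetime, timedelta


def oven_start_time(eta: datetime, cook_time: timedelta,
                    preheat: timedelta = timedelta(minutes=10)) -> datetime:
    """Start the oven so the dish finishes just as you walk in.
    `eta` would come from traffic data + your current location."""
    return eta - cook_time - preheat


# Home at 18:30, lasagna needs 20 minutes: preheat from 18:00.
eta = datetime(2013, 6, 1, 18, 30)
start = oven_start_time(eta, timedelta(minutes=20))
print(start.strftime("%H:%M"))  # 18:00
```

The "no UI" part is that `eta` is recomputed continuously in the background as traffic changes, so the start time shifts without the user ever touching anything.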
- Find remote or phone
- Navigate to Netflix/Amazon/etc
- Click around miserable arrow keys to find whatever I'm watching
- Press play
Is now reduced to
- "Xbox, Bing [ugh] Star Trek"
- "Play on Amazon"
It clearly still needs some refinement (keywords get a bit verbose), but it's definitely the future of TV (lack of) UIs.
If Apple starts buying tons of directional mics, every TV manufacturer on the planet should be scared to death.
Google search is an e.g.: almost no UI, improves over time, adapts to you.
Awesome, I agree, except that all of those examples are shit.
It's not "more simple" to just walk up to your car and have it magically unlock based on proximity. Simple is using your damn key to unlock the car, not layering stacks of abstractions in order to compute one's location juxtaposed to a vehicle. In fact, the order of events would go something like this (as a generic, modern-day implementation of this functionality):
- owner approaches car
- owner's keyfob transmits signal to car
- owner's car polls for incoming signal
- owner's car decrypts keyfob signal
- owner's car verifies that the keyfob has a legitimate encrypted key for that vehicle
- owner's vehicle signals the locking routine in the ECU
- owner's ECU flips the solenoid for only the driver's side door
- door unlocks
- owner enters vehicle
How the hell is this more simple than:
- owner unlocks door with key
- owner enters vehicle.
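For what it's worth, the keyfob sequence above is essentially a challenge-response handshake. A toy version (hypothetical Python, using HMAC as a stand-in for whatever proprietary scheme a real ECU uses; all names here are invented):

```python
import hashlib
import hmac
import os

# Shared secret burned into both the fob and the ECU at the factory.
SHARED_KEY = os.urandom(16)


def fob_respond(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    # Fob signs the car's random challenge with the shared secret.
    return hmac.new(key, challenge, hashlib.sha256).digest()


def car_verify(challenge: bytes, response: bytes,
               key: bytes = SHARED_KEY) -> bool:
    # ECU recomputes the expected response and compares in
    # constant time; a match triggers the door-unlock solenoid.
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)


challenge = os.urandom(16)         # car polls and issues a nonce
response = fob_respond(challenge)  # fob answers over radio
car_verify(challenge, response)    # True -> unlock driver's door
```

Using a fresh random challenge each time is what stops a thief from simply recording one unlock transmission and replaying it later.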
Likewise, having your payments automagically charged based on location is NOT more simple. Simple is ordering your food and handing over money at the register.
The best interface is a simple interface, not a whole bunch of programming voodoo to achieve a simple task.
The point is: the best interface is NO interface.
Go to car. Open door. Sit.
Not: Go to car. Find keys. Unlock. Open Door. Sit.
Ordering your food is less simple than ordering your food and handing over money? But you do lose a lot of anonymity, which offsets the ease of use.
- owner puts hand in pocket
- owner grabs keys
- owner removes keys from pocket
- owner finds car key
- owner inserts key into lock
- key engages pins in lock
- owner applies torque and turns the key
- pins allow tumbler to turn
- electronics detect turned lock
- lock disengages
- owner turns key back
- owner removes key
- owner puts key in pocket
- owner opens door
- owner enters vehicle
Why do I care about the pins? I don't. They do their job without me telling them what to do.
With menus it's not that simple - I have to engage them; I have to obey them.
If your car door has menus, then yes, that's a huge usability problem, but no one's arguing in favor of that.
My point is that any interface has some abstractions beyond which you get only hacker delight in knowing their inner workings.
For usability purpose you don't need to know there are pins or bits - just that they work.
When, on the other hand, you artificially expose inner workings (menus) and _force_ the user to take note of them -- you should not be allowed anywhere near a design table.
A designer's job is to make users' lives easier, not to shout at them "you stupid user who doesn't know how to exit my maze".
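In code terms, the pins-vs-menus argument is just encapsulation: expose the action, hide the mechanism. An illustrative sketch (hypothetical Python; the class and its internals are invented):

```python
class Lock:
    """The pins are an implementation detail; the only
    user-facing interface is unlock()."""

    def __init__(self, pin_heights):
        self._pins = tuple(pin_heights)  # hidden mechanism

    def unlock(self, key_cuts) -> bool:
        # The user just turns the key; the pin-by-pin check
        # happens entirely out of sight.
        return tuple(key_cuts) == self._pins


lock = Lock([1, 3, 5])
lock.unlock([1, 3, 5])  # True -- no need to know about pins
lock.unlock([2, 2, 2])  # False -- wrong key, still no menus
```

A menu-driven design would be the equivalent of making the user call `raise_pin(0, 1)`, `raise_pin(1, 3)`, ... by hand: the same mechanism, but with its guts forced into the user's attention.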
owner unlocks door with key
If the technology does 1 billion operations on your behalf when you could do it in 10, but you now only have to take one action, then it is simpler for you.
All the above steps are done without the owner having to know about it.
I'm quite sure he is talking human->computer interfaces not computer->computer interfaces.
By your logic, it's simpler to buy something from someone in another country with cash than it is to use PayPal.