
A brief rant on the future of interaction design (2011) - li4ick
http://worrydream.com/#!/ABriefRantOnTheFutureOfInteractionDesign
======
vortico
This is probably my all-time favorite article on the internet. I've brought it
up in conversation dozens of times when referring to ways to control
computers. It's why I don't really use phones for anything beyond answering
phone calls and handling urgent tasks when I'm not at home/office. I just feel
like I have very little control over these devices, unlike the massive control
of a keyboard and mouse, or even better, a fork and spoon when eating dinner
(although, despite their high degree of control, you can't control a computer
with them).

Tech innovators and their users/consumers need to think outside the box more
often for ways to control computers. Given a physical UI device, you can place
it on a spectrum from computer-friendly to human-friendly. Touch screens are
computer-friendly because they're trivial for computers to interpret and for
developers to target. The downside is that users feel distanced from their
control, so the device is difficult to use efficiently. But developers are
better at UI than users are; the difficulty should be shifted onto them
instead, in how they design their software/OS.

I've imagined things like stress balls with "electric muscles" for feedback to
control various applications, a cube with the ability to change its texture
with suction/servos, and things like
[https://roli.com/products/seaboard](https://roli.com/products/seaboard) or
[https://www.expressivee.com/buy-touche](https://www.expressivee.com/buy-touche) except for general computing applications rather than music
performance. Who knows what could be invented if customers could be convinced
their computing experience can be improved with better physical interfaces.

~~~
atoav
As a fan/user/creator of modular synthesizer stuff and a design observer in
general, I can assure you that you are quite definitely not the only person who
feels like this. Vinyl record sales surpassed CD sales again for the first time
not long ago; people crave physical interfaces, and modular synths and
everything hardware are getting more popular despite the ton of plugins you can
get for your music software. People are going back and building physical
interfaces for menu-diving synthesizers from the 80s like Yamaha's infamously
unprogrammable but amazing-sounding DX7.

People are rediscovering mechanical keyboards etc.

Modular synthesizers are a wonderful playground for physical interface design.
Of course you want to expose a lot of your module's functionality, but then
your panel will get big and your module will end up taking up a lot of space.
Of course you can get away with using a screen and an encoder, but how well can
you use it live while making music then? Of course you can add 20 dials, but
maybe 3 clever ones that abstract over those 20 in just the right ways are
better? Also: people might not use your module for a month; how can the design
remain in their head just enough to kick a spark?

Physical interfaces. Use them blindly, while wearing gloves, in your pocket.

~~~
vortico
I have a few thousand HP of Eurorack and definitely agree that modular synths
are easier, more fun, and more interactive to control than most other
interfaces. But even then, they're clunky for some things, and I feel that
there are _further_ designs that can achieve control even more efficiently.

~~~
atoav
I agree. There is even more possible, and not all modules are really well
designed (e.g. why did Dieter Doepfer feel the need to swap around the
otherwise identical inputs between his SEM and Wasp filters?). Other modules
just don't go far enough; many lack attenuators/attenuverters (for economical
and space reasons).

You have to really fight to get a decent physical interface, to get a good one
takes even more.

I believe the future of modular interface design lies especially in more
visual feedback, which makes usage easier and more intuitive and systems more
reliable during live use. There we can learn from modular and semimodular
synths on the software side.

------
Chris_Newton
It’s sad that since this article was first written, the trend for dumbing down
both the hardware and software we use every day has only continued.

When I was a kid, I used to love experimenting with programming. I first
learned on a ZX81, with a magnificent 1KB of RAM, typing in simple games
programs from books every session because there was no storage device to save
them. That experience, that joy of being able to _create_ something fun,
sparked an interest in what these amazing technologies we have invented can
do.

Some of my slightly younger friends were lucky enough to have more powerful
systems like the BBC Micro available by the time they reached that stage.
Those were brilliant because you could connect _anything_ to them. When they
were writing simple LOGO programs at school, they didn't just draw a circle on
the screen; an actual mechanical turtle with an actual pen would draw a circle
on a real piece of paper, right before their eyes.

Now I want to offer that same joy and intrigue to the next generation of my
family. Today’s ubiquitous computing devices are phones and tablets, each with
numerical specs many orders of magnitude bigger than that ZX81. That little
boy typing in listings from a book now has multiple decades of professional
programming experience to share.

And yet, I can’t sit down with my own child and write even the most simple
game on those devices, because for all their theoretical power, they lack even
a rudimentary programming interface. In some cases, I can’t even write a game
myself on another system and port it, because the whole ecosystem is closed
off.

How is it that in a time when children seem, often all too soon, to be
carrying around more processing power in their pockets than a supercomputer
had when I grew up, they still can’t enjoy the sense of freedom and discovery
that I experienced with my little ZX81 and its 1KB of RAM and no storage
device in the 1980s?

~~~
fragmede
They can!

If you're on iOS, have you tried Pythonista? It's $8 on the app store, uses
Python as the programming language, features an IDE, and a UI library so
budding programmers can write video games.

For those on Android and the technically minded, a rooted Android tablet
offers far more options for on-device programming. There's likely an
equivalent to Pythonista in the Google app store as well.

(No affiliation; I'm sure there are other apps that are similar, that's just
the one I know of.)
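
In that spirit, here is the sort of minimal listing a kid could type in and run in Pythonista (or any Python environment); pure standard library, nothing app-specific. The `guesses` parameter is a hypothetical hook added here so the round can also be driven without a keyboard:

```python
# A tiny "guess the number" game, the sort of listing a kid could
# type in from a book. Pure standard library; runs anywhere Python does.
import random

def play(secret=None, guesses=None):
    """Play one round. `secret` and `guesses` can be supplied for
    non-interactive use; interactively they come from random/input."""
    if secret is None:
        secret = random.randint(1, 100)
    source = iter(guesses) if guesses is not None else None
    tries = 0
    while True:
        if source is not None:
            guess = next(source)
        else:
            guess = int(input("Your guess (1-100): "))
        tries += 1
        if guess < secret:
            print("Too low!")
        elif guess > secret:
            print("Too high!")
        else:
            print(f"Got it in {tries} tries!")
            return tries
```

Run `play()` with no arguments for the interactive version a child would actually use.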

~~~
dragonwriter
You don't need to root Android (or even go outside the app store) to have a
wide range of on-device programming options (you might need to engage
developer mode for many of them, but you can do that without root.)

~~~
fragmede
I'm sure you're right, but I've moved away from Android. Are there any apps in
the Google store that you know of, or would personally recommend?

~~~
dragonwriter
Well, RFO-BASIC! is an interesting modernized BASIC with relatively full
integration with phone features that is available on the store (you have to
get the non-Store version for SMS features, though).

It's the thing I've seen that off the top of my head seems most related to
your comment.

------
Animats
Victor is one of the people behind Dynamicland, the "live paper"/"room is the
computer" startup that was shut down recently. That was on HN last week or so.

Fancier input devices seem to have come and gone. They peaked in the 1990s,
when you could see a lot of them at SIGGRAPH. My favorite was the magnetically
levitated sphere in a bowl. It was like a 3D joystick/trackball with force
feedback. It was really cool. It never sold. There were lots of gadgets like
that. An animator friend had a workstation with a keyboard, a knob box, a
button box, a joystick, a trackball, a tablet, and two screens. Some of the
people doing Jurassic Park had a model dinosaur where you could move all the
joints, the computer could read the angles, and the on-screen image moved to
match. None of this ever caught on. Even 3D joysticks are rare. Game
controllers with two joysticks caught on, but those joysticks are abstractions
of what's on screen, as, for example, steering, not direct interaction.

I tried Jaron Lanier's original gloves-and-goggles VR system. You couldn't do
much with the gloves. That was pretty much true in later glove systems.
Autodesk fooled around with VR in the 1990s, but determined that gloves and
goggles were not going to make CAD easier.

Lack of force feedback is a huge problem with gloves. Without force feedback,
it's like trying to work in oven mitts. Much of human precision relies on
tactile feedback; without that, precision work is slow and tiring, as everyone
who's soldered surface-mount parts under a microscope knows.

Back in the 1990s, when I was working on collision detection and physically
based animation, I considered building an input device I called "The Handle".
The Handle was to be a jointed arm, like a robot arm, with a grip handle on
the end as an input device. A servomotor system (or, for cost reasons, I was
thinking brakes only back then) would provide tactile feedback. The handle
itself would have the ability to split, like pliers, so you'd have something
to squeeze.

The Handle could potentially simulate pliers, tongs, wrenches, hammers, etc.
Do simulated auto repair. Assemble Meccano. This would have been what Victor
is calling for.
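
For a sense of what the servo side of such a device involves, here is a rough sketch of the standard haptic trick of rendering a virtual wall as a spring-damper force; all names and constants here are hypothetical, not anything from the actual design described above:

```python
# Hypothetical sketch of the force-feedback loop a device like "The
# Handle" would run: a virtual wall rendered as a spring-damper force.
def wall_force(pos, vel, wall=0.0, k=2000.0, b=5.0):
    """Force (N) pushing the grip back out of a virtual wall at `wall`
    (m). k = stiffness (N/m), b = damping (N*s/m). In free space the
    servos do nothing; inside the wall they resist pushing further in."""
    penetration = wall - pos  # > 0 when the grip is inside the wall
    if penetration <= 0:
        return 0.0
    return k * penetration - b * vel
```

A real controller would run this at around 1 kHz, which is roughly what it takes for a simulated surface to stop feeling spongy.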

Could it be built? Yes. Would it sell in volume? No.

That's the problem.

~~~
Nition
I tried out the 3D Systems Touch Haptic Device a few years ago, which sounds a
bit like a simpler variant of your "handle".[1] It was quite good. Feels
really strange, but good, to be interacting with things "inside the screen".

[1] [https://www.matterhackers.com/store/l/3d-systems-touch-hapti...](https://www.matterhackers.com/store/l/3d-systems-touch-haptic-device/sk/M12PCJ5U)

~~~
Animats
That's one of the very few devices in that space ever to be produced in
volume. Small volume, but a real product.

------
guitarnick
Though this has been discussed many times, I must say the trend of using
touchscreens to replace dials and buttons on cars is extremely worrying. It’s
entirely an aesthetic and cost saving decision at the expense of safety and
UX.

~~~
arkh
> I must say the trend of using touchscreens to replace dials and buttons on
> cars is extremely worrying

Most are going back on that. It is just that what is currently being sold was
developed years ago.

~~~
reaperducer
I'm lucky that my car was made in the in between period, so all touch screen
controls have physical controls as well.

I use the physical controls, especially when driving. But my navigatrix
prefers the screen, primarily because she likes the visual feedback. (Not
enough of the physical controls in my car have things that light up to let you
know something's happened.)

------
neilobremski
This brings to mind the phone ringing in the middle of the night. You pick it
up and mumble into the receiver and the conversation begins. But no, now
the phone rings in the middle of the night (assuming it isn't text / Slack /
email / etc.) and you pick it up and then try to focus your eyes on the screen
to figure out WHAT is happening and then HOW to answer. It seems the massive
leap forward in the CAPABILITY of our phones has actually caused them to
stumble backwards in PRACTICAL usability.

~~~
rubidium
So, has anyone made a “old school smartphone dock”?

What I want is a dock I can put my smartphone into and then the old phone
interface takes over. Preferably one with links to 1-2 other docks in the
house so all three ring if I get a call.

Pick up the physical receiver, say hello. Talk. Say goodbye, hang up. No
screen interaction at all.

~~~
bblough
The cordless phone system I have lets you make/receive calls via your cell
phone from any handset in the house. It connects via Bluetooth, so isn't a
dock, but the functionality is similar to what you're suggesting.

Mine is an older Panasonic system, but I'd be surprised if other manufacturers
didn't offer something similar.

~~~
rubidium
Like this? [https://shop.panasonic.com/cordless-corded-telephones/cordle...](https://shop.panasonic.com/cordless-corded-telephones/cordless-telephones/KX-TG7875S.html)

~~~
bblough
Yes, mine is very similar.

Though, (and I should have mentioned this in my original comment, sorry) I've
never actually used the functionality in question, so I don't know how well it
actually works.

------
tboyd47
Instead of a single spectrum from computer- to human-friendliness, picture a
triangle with the third vertex being "task-friendliness" or task specificity.
The author is advocating a return to human-friendliness, but what he describes
is more like task-friendliness.

I've become somewhat obsessed with ergonomics after a long battle with chair
pain. In my mind, true human-friendliness does not require new interfaces for
every task (although it may require a slightly different interface for each
_human_ ). Every computing task may very well be handled best by a person
sitting in a chair at a workstation. That wouldn't be bad design IMO;
computers are, in fact, general purpose.

If we can get to a point where each computer user is able to use their
computer comfortably for 8 hours straight, without pinching a nerve or a blood
vessel (yes! it's possible!), then I would consider that a human-friendly
design.

I'm not talking about iPads here - the author is totally right that these
devices don't seem to be made for human beings.

~~~
jacobolus
As far as I understand, remaining in a static position for 8 hours straight
during the day and another 8 hours straight while asleep is pretty bad for
many parts of the body irrespective of what that position happens to be.

People need to be occasionally changing position, moving about, getting their
circulatory system working, getting their muscles working, focusing their eyes
at a longer distance, etc.

We can improve people’s experience with sit–stand desks, frequent short
breaks, periodic longer breaks, a daily commute requiring walking or other
exercise, etc., but reducing the total number of hours people are staring at
screens every day would also be generally helpful. It isn’t all that great to
exclusively work at even the most ergonomic possible computer workstation.

~~~
tboyd47
There's truth to what you're saying, but it implies you have to accept a
certain level of pain or physical discomfort if you work in a seated position,
and the only way to manage the pain is to get out of your seat. But there are
types of pain that can't be managed that way. Those injuries have to do more
with sustained muscle tension caused by bad equipment than just sitting, and
will continue to get worse until you change your equipment to eliminate the
tension.

------
gdubs
A few things come to mind. First, I've been reading a lot on the science
behind Montessori education, and there's a lot to support the idea that our
brains are much more active when we're physically interacting with something.
Would be logical to assume that the richer the physical interaction, the more
meaningful the connection.

Second, reading Jaron Lanier's somewhat recent book on VR I was struck by how
far ahead of its time the 'data glove' was. Early VR may not have had the
dazzling eye-candy of today's graphics, but on interactivity we just seem to
be catching up. (As an aside, I never realized the Nintendo PowerGlove was
basically Data Glove 'lite'; developed by VPL.)

Edit: Third, I recently read Carl Sagan's "The Dragons of Eden" which he wrote
in the 1970s. It's about the evolution of the human mind. It's SO worth
reading today, even if it's outdated and proven wrong at times by contemporary
discoveries. One thing that really stood out was this idea that it wasn't so
much that humans evolved and then invented tools – but that perhaps tools
shaped our evolution as much as we shaped tools. Invent a simple tool -> use
the tool -> brain develops as a result -> invent even more complex tools ->
etc.

------
dang
A thread from 2013:
[https://news.ycombinator.com/item?id=6325996](https://news.ycombinator.com/item?id=6325996)

Discussed at the time:
[https://news.ycombinator.com/item?id=3212949](https://news.ycombinator.com/item?id=3212949)

------
modernerd
> With an entire body at your command, do you seriously think the Future Of
> Interaction should be a single finger?

What if you don't have an entire body at your command?

The plus side of “pictures behind glass” is that it's a fairly well
standardized interface — the software has to adapt to that homogenized
hardware, instead of you fighting to adapt to the hardware. (Or, more
accurately, you adapt to it once, instead of once for every type of object you
want to interact with.)

If interactive experiences all start to require a good range of motion, bodily
sensitivity, and ability to instinctively interpret the interface, there's a
risk it could be incredibly alienating for many. Unless we design with that
consideration from day one, it could make adaptivity harder than it already
is.

I went looking for Bret Victor's take on this question because I was certain
he would have thought of this already. So far I only found this:

“Channeling all interaction through a single finger is like restricting all
literature to Dr Seuss's vocabulary. Yes, it's much more accessible, both to
children and to a small set of disabled adults. But a fully-functioning adult
human being deserves so much more.”

[http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi...](http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesign/responses.html)

I find it sad to read that. Those with access issues deserve so much more too.
There's already a huge access gap. If we're going to champion new modes of
interaction, we should fight hard not to make the gap bigger still.

~~~
TeMPOraL
This may be controversial view, but I'm with Bret Victor on this. Of course,
we should do as much as possible to make everything accessible to everyone.
But we should _not_ strive for a single universal interface that's accessible
to all, because that's sacrificing both the best and the average case for the
worst case.

Consider books and vision loss. The tried-and-true solution to make books
accessible is printing them in Braille. But nobody in their right mind would
be suggesting we should only ever print books in Braille, because then it's
equally accessible. It would be more "fair", but it would also be a ridiculous
waste for the 99.5% of the world's population[0].

If we were to equalize all our technology and processes across all
accessibility issues, our civilization would collapse: nothing could depend on
body motion, body sensitivity, sight, sound, speech, taste, or pain. The union
of all existing disabilities is the life of a rock.

So since we can't really have a truly universal interface[1], we may as well
give up on trying to design technology to the lowest common denominator,
losing most of the benefits it gives, and instead design it to play to the
strengths of the human body. Fallback options should be available where possible,
but at some point we have to understand that until medical technology
homogenizes our bodies, not everyone will be equally suitable to every task.

(And I say that as someone who has always dreamed of being a pilot or an
astronaut, but for whom that career path was cut off by severe nearsightedness
early in teenage years.)

\--

[0] - "36 million people are blind" according to [https://www.who.int/news-room/fact-sheets/detail/blindness-a...](https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment)

[1] - At least not until a proper brain-computer interface exists, and we all
interact with everything using our thoughts.

~~~
modernerd
Thank you for this perspective. You changed my mind about this.

------
_bxg1
There are some gaps in reasoning, but overall I think the point stands.

The biggest omission is the key advantage of "Pictures Under Glass": they can
be defined purely in software. Because of that, I don't think they'll ever
stop serving a role in our device interactions. No amount of space-age
physical device design could've supported the smartphone's explosion of apps
without pictures under glass or some equivalent.

With that said, we could definitely benefit from moving some of the more
constant/widespread interactions back to physical controls. Buttons and
switches on smartphones are always appreciated. And don't even get me started
on cars, where there's little need for a vibrant developer ecosystem and a
_ton_ of need for non-visual interface comprehension. I also think the Apple
Watch's "crown" is one of the more interesting recent examples of a tactile
interface that doesn't sacrifice open-ended software development.

~~~
tabtab
Re: _Buttons and switches on smartphones are always appreciated._

I disagree; I'm always accidentally bumping or pushing them. Plus, they
usually do tasks that are not my top tasks. I'd like to reassign some of them
to my favorite actions.

------
arkh
I'd like to extend this to fucking VR. All VR experiences lack one thing: full
body haptics. Take something inspired from the "Pacific Rim" cockpits (
[http://i.imgur.com/DLze6j7.jpg](http://i.imgur.com/DLze6j7.jpg) ) so you can
have some contraption giving you a huge freedom of movement and haptic
feedback everywhere. That'd make some games a lot more immersive and physical.

~~~
erikpukinskis
The reason games don’t feel physical is not because of which senses are being
stimulated, it’s how they are stimulated.

If the game is just waiting for your hand to enter a bounding box, that’s a
0-dimensional interaction, which feels like nothing. If you could hear your
fingers getting close to things, and see the angles between your body and
other surfaces, sense through visual texture the relationship your body has
with different fields, you would register those things as “physical”
experiences.
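
A rough sketch of that contrast, with entirely hypothetical names: a bounding-box test yields a single bit, while a continuous proximity signal (here mapped to an audio gain) gives the brain a gradient to steer by:

```python
import math

# Binary hit-test: the kind of interaction that "feels like nothing".
def in_box(hand, lo, hi):
    return all(l <= h <= u for h, l, u in zip(hand, lo, hi))

# Continuous feedback: distance to a target mapped to an audio gain,
# so the signal changes smoothly as the hand approaches.
def proximity_gain(hand, target, falloff=0.5):
    d = math.dist(hand, target)
    return math.exp(-d / falloff)  # 1.0 at contact, fading with distance
```

The first gives the player nothing until the instant of "contact"; the second is information the nervous system can actually close a loop around.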

Touch doesn’t feel like touch because it’s your fingers, it feels like touch
because it’s rich and continuous and textured and perfectly grounded in body-
space.

You can make proprioaudiovisual experiences like that too, it’s just hard and
there are easier ways to make video games that make money.

The science is clear that our brains don't care which sense the data comes
from; they just care about the structure of the data.

Although there are anatomical structures that make some senses better at
processing one structure than another, they don't prevent the brain from using
those senses for other schemas if the environment demands it (injury,
prosthetics, etc.).

------
cvaidya1986
This is great! It really shows that what we have today is a reductionist
approach: only using our finger on a 2D surface, as opposed to utilizing all
our motions, muscles, and senses to interact with technology in 3D space.

~~~
zdragnar
Personally, I think it's a bit asinine.

Rather than decry anything that resembles what we have today as old-fashioned
and not futuristic, we should first ask _"why are screens so common today?"_

We've been using our hands for years- keyboards, trackballs, mice, joysticks,
custom wheels for driving games, the Nintendo light gun for Duck Hunt, and so
forth. Take that- and combine it with something else that should be jumping
out at you from the article: _every picture with a hand shows it holding a
different object_.

Screens are multi-purpose. They are perfect for scenarios where one control
must meet many needs (compared to the pictured hammer, which at best meets
two). Let's say that we could imagine some multi-purpose, tactile, 3
dimensional interface... say, like a hologram from Iron Man or Star Trek. Now
compare that to the article's mention of Alan Kay's pre-integrated-circuit
doodle. The difference is huge... we don't even have the physics knowledge to
describe how a tactile hologram could exist at safe handling energies, let
alone in open air, as described by those fictions. Comparatively, Alan Kay
combined things that already existed... battery, circuits, screen and keys,
and imagined what could happen if they were smaller and portable.

We had things much more similar to what he drew decades before we had the
iPad; the iPad is merely an incremental improvement over some of my own
childhood toys, not to mention what was already available a few years after
his own drawing, such as the Speak and Spell (
[https://en.wikipedia.org/wiki/Speak_%26_Spell_(toy)#/media/F...](https://en.wikipedia.org/wiki/Speak_%26_Spell_\(toy\)#/media/File:Speak_&_Spell_\(original_style\).jpg)
)

~~~
p_l
The most honest answer?

Because putting a damned capacitive touchscreen is cheaper than even one good
button.

I spent a non-trivial amount of time finding devices that have tactile,
physical interfaces, and I certainly paid a premium in all cases. Putting a
capacitive layer over an LCD is god damn cheaper, for the reasons you describe.

It's not that the screens are better, or even good. It's that they are so
versatile that they are the cheapest, lowest common denominator option for
interaction design.

In a sense, Star Trek predicted this, as at least part of the reasoning behind
flat panel touchscreens everywhere was that it was very, very cheap prop to
make compared to complex physical controls.

The touchscreen is the natural evolution of unending drop-down menus that
require several trips through them every time you do some operation, because
application development left no time for UX research.

~~~
nwienert
This just isn't right. It's not the cheapness, not by far. The iPhone didn't
revolutionize the world because it was cheaper - it did so in spite of it.

It revolutionized the world because a big screen means:

\- More content you can view at once

\- More flexible and powerful apps that can have more, and more intuitive,
controls and display more information

\- Touching an app and interacting with it directly means you can manipulate
apps far more intuitively and quickly, with direct feedback (remember
scrolling webpages before touch with tiny ball wheels or up/down arrows? yeah,
it sucked)

Resistive touch screens sucked, but capacitive ones, though more expensive,
were the game changer. I think you have it backwards.

~~~
TeMPOraL
iPhone "revolutionized" the world[0] because it gave people a portable
computer that combined multiple separate devices into one, with (added
retroactively) possibility for extending it with further capabilities.

This justifies _a_ smartphone in its current form. It does not justify why
your car stereo is operated by a touch screen.

The truth to p_l's comment is plainly evident if you look at household
appliances of today. You'll note that ovens and washing machines don't even
have touch screens. They have touch panels with fixed image underneath (same
with pre-CGI Star Trek interfaces). They're a UX nightmare, but are used
precisely because capacitive touch detection is very cheap (doubly so at low
fidelity required in these cases) - it's just a plastic/metal sandwich with
zero moving parts.

\--

[0] - iPhone is the one that's most remembered, but not the only phone in that
space in its era.

~~~
nwienert
Sure, I was just responding to the parent post though. As for a touchscreen in
your car, I expect people have different preferences there.

~~~
p_l
A big problem is that you don't really choose a car based on whether it has a
touchscreen or not. And then there's the aspect of "distracted by the shiny",
which has to be tempered by experience with touchscreens.

------
badsectoracula
Back when touchscreens started to become popular, I remember some talk about a
'tactile overlay' for touchscreens: essentially a grid of tiny bumps (pixels,
though at a much lower resolution) that would 'pump up' and 'flatten' just
enough to provide a tactile feel to the otherwise glassy screens.

What happened to that?

------
melling
I almost made a comment on this article about using Deep Learning and motion
tracking to have computers better understand our gestures so we can interact
visually.

[https://news.ycombinator.com/item?id=21115863](https://news.ycombinator.com/item?id=21115863)

At some point AR and VR will finally create the overwhelming need. Then we'll
wonder why we didn't think of it sooner.

At the moment, we’re stuck in the “a keyboard and mouse are better” stage.

~~~
weego
You can't just drop the idea of Deep Learning into a problem as a magical
solution.

What does visual interaction entail, and what has stopped it from already
being trialled during one of the previous (and current) attempts to get VR to
work for people?

~~~
melling
If you can use Deep Learning and CNNs to drive a car, you can probably use it
to recognize different hand motions.

[Update]

I found a link where someone tried to do a simplified version of this:

[https://towardsdatascience.com/tutorial-using-deep-learning-...](https://towardsdatascience.com/tutorial-using-deep-learning-and-cnns-to-make-a-hand-gesture-recognition-model-371770b63a51)
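
For the curious, the core operation such tutorials build on can be sketched in a few lines of NumPy. This is just the sliding-window convolution itself with a fixed filter, for illustration only; a real gesture model stacks many such layers and learns the filters from data:

```python
import numpy as np

# Core operation of a CNN: slide a small filter over the image and
# record how strongly each patch matches it. Training adjusts the
# filters so that different hand shapes light up different ones.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```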

~~~
mkl
It's certainly possible to recognise hand motions (see e.g. Leap Motion), but
I think hand motion recognition is pretty hard to use effectively without
haptic feedback telling you what you're touching and where. Ideally actuated
like [1] but for your whole hand, not just a stylus. I don't think technology
is anywhere near that. The device in [1] pushes back against you, physically
stopping you from moving through virtual objects.

[1] [https://www.3dsystems.com/haptics-devices/touch](https://www.3dsystems.com/haptics-devices/touch)

~~~
erikpukinskis
You don’t need haptic feedback, just good feedback. There’s plenty of bits in
an audio stream alone to give the brain the control feedback it needs. You’d
be surprised how few degrees of freedom we use for grasping.

------
bsanr2
I noticed this while experimenting with VR and hand-tracking. Without tactile
or precise auditory feedback, the experience was hollow, even with relatively
high-fidelity visuals and tracking. You can never feel like you are somewhere
else - or something immaterial is with you - until a full sensory experience
coalesces, even if it's "low resolution." Until then, the result is landing in
an experiential uncanny valley.

On another note, we should also be looking at accessibility when approaching
this research. Contrary to that last little bit at the end, not everyone has
full use of their bodies, or even of their hands. I just finished listening to
an NPR segment about inner-city youth dealing with permanent disability as a
result of gun violence, and the unique challenges presented to them. Perhaps
with people like them in mind, the mass solutions we eventually end up with
will be better for focusing farther upstream than nerve endings.

~~~
Wowfunhappy
> You can never feel like you are somewhere else - or something immaterial is
> with you - until a full sensory experience coalesces, even if it's "low
> resolution."

I'm not sure I agree with that. Games like Beat Saber and the old demo of
Budget Cuts have really made me feel like I'm inside another world,
particularly when I was newer to VR.

I say "not sure" because we're not necessarily talking about the same
thing—the above are "games" or at least "experiences", not "interfaces". You
don't really touch anything in Beat Saber, except for the blocks, where a
strong rumble is sufficient. Harder to do that in a UI.

~~~
seanmcdirmid
In a fitness experience, you can imagine the resistance at least, which is
what I wind up doing in Beat Saber to increase my heart rate and calorie burn.
It is actually an interesting problem: can the mind imagine feedback that
really isn't there, causing muscles to work harder even if they don't have to?
For some people at least, the answer is yes, though I bet it would be hard for
others. But shadow boxing, katas, and so on show that imagining resistance has
been around for a while.

We can kind of imagine tactile feedback already in a UI, and vibration is
effective in reinforcing that illusion. Perhaps the way forward is to further
play around with fooling the brain (via some kind of cognitive interface)
rather than somehow reproducing real feedback.

~~~
TeMPOraL
Vibration is key in current VR experience, IMO. Having spent many hours on
Beat Saber over the past few weeks, I can say that the only reason my brain
recognizes that I hit something with the blade is the little controller buzz
that activates at the right moment.

As for adapting yourself to the game, I keep having to focus on preventing
myself from using my wrists too much when playing. Given that my goal is
calorie burn, I do my best to use my whole arm (as if wielding a real blade).
I've noticed that I do create "impact resistance" myself too.

~~~
seanmcdirmid
It took a while to stop using my wrists, but now I imagine it as some
immensely satisfying glaive work.

BoxVR is another game where imagining feedback is crucial, even more so given
its fitness focus. Its inaccurate calorie counter is completely motion-based,
so wider, faster movements will drive it up as well. Honestly, BoxVR (custom
with lots of fast songs) is now more important than Beat Saber to my daily
workout, because I’m able to get a lot more effective motions in. On the other
hand, I can’t play it with the toddler around (if he got in my way, he could
really be hurt by the gestures I’m doing).

------
dreamcompiler
This is why I dislike touch screens in cars so much: You have to look at them.
We have 100 years of experience with car user interfaces that don't require
you to take your eyes off the road, but we threw all that away in favor of
these shitty pictures under glass.

------
gitgud
I feel my Nokia 3210 in high school had a much more intuitive UI than the
latest iPhone... and it was trivial to use without looking at it, since your
hands could navigate the keys...

This isn't possible anymore with _sliding pictures under glass_...

------
simulate
I remember this Bret Victor piece from 2011 and I was thinking about it
recently when Google announced Soli for the Pixel 4.
[https://www.cnet.com/news/project-soli-is-the-secret-star-
of...](https://www.cnet.com/news/project-soli-is-the-secret-star-of-googles-
pixel-4-self-leak/)

Soli and other gesture-based technologies seem like at least an incremental
move toward the kind of motion-driven interaction Bret Victor was advocating
for in 2011.

~~~
Palomides
to me that seems like the same problem, since you still don't touch anything,
the only feedback is the computer doing something (or not)

that said, I'm definitely curious to see how it performs in the wild

------
syrrim
I think maybe the principle is that we want to be able to transfer as many
bits/second of data as possible. We have the computer->human direction down,
because screens can already emit more information than humans can process. But
human->computer we can only go so fast. This is partly limited by how many
fingers we use for input: I can type faster on a keyboard than on a screen.
But I think the main issue people are worried about is that humans can only
think so fast. If the interface requires asking the human for input too often,
it will take forever. So instead you try to guess what the human wants, and if
you do this well enough, the input method doesn't matter. Some people say they
can type better on their phones because they can use SwiftKey. So I think most
visions of the future are thinking about AI and intelligence, asking "what
will we be able to guess without asking the human first?" That is where they
think interaction design is going.
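
A quick back-of-envelope check of the asymmetry described above. All the
numbers here are assumptions, not measurements: a fast typist at 80 wpm, the
common 5-characters-per-word convention, Shannon's rough ~1.3 bits/char
entropy estimate for English, and a raw 1080p/60Hz pixel stream on the output
side:

```python
# Human -> computer: a fast typist at ~80 words per minute.
words_per_minute = 80
chars_per_word = 5       # common typing-speed convention
bits_per_char = 1.3      # Shannon's rough entropy estimate for English text
typing_bits_per_sec = words_per_minute * chars_per_word * bits_per_char / 60

# Computer -> human: raw pixel stream of a 1080p display at 60 Hz.
screen_bits_per_sec = 1920 * 1080 * 24 * 60  # width * height * bits/px * Hz

print(f"typing: ~{typing_bits_per_sec:.1f} bits/s")
print(f"screen: ~{screen_bits_per_sec / 1e9:.1f} Gbit/s raw")
print(f"ratio:  ~{screen_bits_per_sec / typing_bits_per_sec:.0e}x")
```

Even granting that most of the raw pixel stream is redundant, the gap is
around eight orders of magnitude, which is the point: the input channel is the
bottleneck.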

Another point: such a machine requires fewer bits per second from the human.
That is, it requires them to do less thinking, which most people will prefer.
And if you're trying to predict the future, "machines will require less
thinking (but be more mass market)" is a pretty safe bet.

Sandwiches are very fancy things, as everyone knows. But I expend a lot of
mental effort getting the cheese slices in the right thickness. I'm sitting
there focusing on making sure I don't deflect the knife too deep or too
shallow. This is a very fancy task. But many people prefer specialized cheese-
slicing devices, because they make more precise slices, and because they
require less mental effort. So expect UI to follow the same trend, from
general tools requiring complex input to particular tools requiring simple
input.

------
keenmaster
Haptic, resistive gloves will merge the tactile with the digital in VR
applications. Just like touchscreens, they will serve as a single device that
can simulate an infinite number of different controls. Give the tech 5-10
years to miniaturize and mature. [https://haptx.com/](https://haptx.com/)

------
JesseAldridge
Embodied cognition is definitely a thing:
[https://en.wikipedia.org/wiki/Embodied_cognition](https://en.wikipedia.org/wiki/Embodied_cognition)

And in software development, the primary bottleneck is thinking. Anything you
can do to enable greater clarity of thought is going to have huge leverage. So
I think this is a super promising angle.

Bret's actively developing his ideas at DynamicLand:
[http://dynamicland.org](http://dynamicland.org) But clearly it will be quite
some time before that tech becomes viable in the real world.

Now I'm wondering if there are simpler ways to use the rest of my body to
facilitate thinking. Like, I can't mount lasers and network connected cameras
on the ceiling... but maybe there's some software that can use a webcam to do
simple gesture recognition mapped to bash scripts or something like that.
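
A minimal sketch of that gesture-to-script idea, assuming some hand tracker (a
webcam pipeline such as MediaPipe could supply this) emits normalized per-frame
hand x-coordinates. The threshold, gesture names, and mapped commands below
are all hypothetical placeholders:

```python
import subprocess

# Hypothetical mapping from a recognized gesture to a shell command.
GESTURE_COMMANDS = {
    "swipe_right": "notify-send 'next workspace'",  # placeholder commands
    "swipe_left": "notify-send 'prev workspace'",
}

def classify_swipe(x_positions, threshold=0.25):
    """Classify a horizontal swipe from normalized hand x-coords (0..1).

    x_positions is a sequence of per-frame positions from any tracker;
    the 0.25 threshold is an arbitrary assumption, not a tuned value.
    """
    if len(x_positions) < 2:
        return None
    delta = x_positions[-1] - x_positions[0]
    if delta > threshold:
        return "swipe_right"
    if delta < -threshold:
        return "swipe_left"
    return None

def dispatch(gesture):
    """Run the shell command mapped to a gesture, if any."""
    cmd = GESTURE_COMMANDS.get(gesture)
    if cmd:
        subprocess.run(cmd, shell=True)
```

The classification is deliberately decoupled from the camera, so the hard part
(tracking) can be swapped out while the gesture-to-command table stays a plain
dict you edit by hand.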

------
pgodzin
The benefit of "Pictures Under Glass" is flexibility - it is extremely easy to
change the picture to fit almost any use case. It is extremely difficult to
build an equally flexible interface out of something tactile that can
manipulate its own shape to suit the task.

~~~
icebraining
In 1968 the iPad also seemed extremely difficult, though.

------
justinmchase
> Are we really going to accept an Interface Of The Future that is less
> expressive than a sandwich?

Great line.

~~~
tabtab
Probably written by a hungry author.

------
lkrubner
The article focuses on the use of hands, but a visionary future technology
might focus more on voice and NLP. This is something I have worked on for some
years:

[http://www.smashcompany.com/technology/the-advantages-of-
a-n...](http://www.smashcompany.com/technology/the-advantages-of-a-natural-
language-processing-interface)

------
tabtab
I predict the UI of the future will be via brain implants that allow one to
control an app with just thoughts. It may become a necessity in order to keep
up with other countries that may have fewer qualms about the ethical side of
surgery and intrusion.

------
cryptozeus
I think we are in a transition phase. Current visions of the future assume we
will keep the same hand-driven interfaces, which may not be true. The rise of
IoT devices and voice-controlled interfaces tells a different story.

------
tambourine_man
2011

------
jct3u
The video anchoring the article shows as unavailable for me. Is there an
alternate place to view it?

~~~
lancebeet
Same here, I googled the url of the embedded video and it seems to be a
concept video by Corning:

[https://www.youtube.com/watch?v=6Cf7IL_eZ38](https://www.youtube.com/watch?v=6Cf7IL_eZ38)

------
naiveai
The condescending tone of this article does not help it whatsoever.

Not to mention the huge gaps in logic that take it from a merely misguided
attempt to rail against well-established interface conventions to an active
argument that those interfaces are still good.

------
slotkin
ahh was really hoping this was something new from Bret

------
dmix
Aw, I was hoping it was a new essay by Bret. Needs (2011) in the title.

~~~
Already__Taken
Bret Victor's talk "Inventing on Principle"
([https://vimeo.com/36579366](https://vimeo.com/36579366)) and, even more so,
Greg Wilson's talk from the same conference
([https://vimeo.com/9270320](https://vimeo.com/9270320)) marked a pivotal
moment for me in my work ethic. It really motivated me to expect better from
the tools I use every day.

~~~
chrisweekly
Similar rxn to those talks. Also, if ever someone knew how to write an
"evergreen" post (ie, one that holds up over time), it's Bret Victor.

His "Ladder of Abstractions" is a great example:

[http://worrydream.com/#!2/LadderOfAbstraction](http://worrydream.com/#!2/LadderOfAbstraction)

~~~
kragen
I really enjoy his writing, but I think he has a way to go before he's in the
same league as Homer, Plato, Lao Tse, and Shakespeare!

~~~
chrisweekly
haha, those guys sure knew how to write a good blog post!

~~~
kragen
Yup! They've got people still clicking on their posts hundreds or even
thousands of years later.

------
fragmede
There's a (2011) missing from the title here, at least according to the title
graphic.

~~~
dang
Added now!

------
carapace
Tiny grey sans-serif font means you hate my eyes.

Extra-ironic on a rant about interaction design.

------
galonk
Seems pretty easy to write an essay saying "screens are old news, let's use
our hands!" containing not a single word about how that would work,
technically or design-wise.

I'm going to write an essay saying "screens are old news, let's use free-
floating holograms like in Iron Man!" with no indication how that would even
be possible, and see if I can get on HN.

~~~
grumpy8
The author of the essay is the founder of
[https://dynamicland.org](https://dynamicland.org) which implements what he's
talking about in this article.

~~~
galonk
None of the things BV has worked on -- smart tiles or putting pieces of paper
under a camera/projector -- do ANYTHING AT ALL like what he's talking about
here. They don't change their size, shape, weight, feel etc. in response to
data. Because that would be incredibly hard. I don't think you should get
points for saying "we should do this impossible thing, that would be better."

I must admit I find BV's writing to be really facile... like he'll compare a
piano app on the iPad with a real piano and shit on the entire idea of the
iPad... without addressing the obvious point that an iPad can be a million
things other than a piano.

~~~
icebraining
His point is not about what we should do, it's about what we should _aspire_
to. He's criticizing a video showing a Vision Of The Future, not the current
status.

> shit on the entire idea of the iPad

That's not at all what he's doing. He's saying the iPad was the Vision of the
Future in _1968_. Now the iPad is here. It's no longer part of the Future, and
we should stop talking about it as if it is, because we need to reach beyond
it.

> I don't think you should get points for saying "we should do this impossible
> thing, that would be better."

When the other contenders for visionary are saying "we should do this thing
that has already been done", saying "we should do this impossible thing" _is_
worthwhile.

