A brief rant on the future of interaction design (2011) (worrydream.com)
238 points by picture on May 27, 2022 | 132 comments



I'm a mechanical engineer and my favorite way to interact with my computer is with my SpaceMouse [1], a sort of joy-stick that you can pull or rotate in any direction to drag, spin, and scale 3D models. You can also use it to fly around Excel spreadsheets or scroll through web pages.

One thing that I love about it is the grip I can use - it's the "precision" grip that the author discusses in the article, and it gives me a lot of fine control over what I'm doing.

The other day, I saw the haptic smart knob on Hackaday [2]. It's a force feedback rotating knob with software defined detents and boundaries.

If someone could combine the SpaceMouse with the smart knob, I think the resulting multiple DOF force feedback controller might just be the input device of the future.

[1]: https://3dconnexion.com/uk/product/spacemouse-compact/ [2]: https://hackaday.com/2022/03/13/haptic-smart-knob-does-sever...


Way back in 1996 at Siggraph, I tried out a "haptic pen". It was a little pen on the end of an articulated robot arm. You held it like a normal pen and moved it around in 3D space. The robot arm would give you force feedback based on where the tip was in space. It was strong enough to completely stop the pen.

In the demo they had set up, there was a 3D model of a car, and you could slide the tip of the pen along the surface. The haptic feedback would stop the pen from penetrating the model, so it really felt like the tip was touching the volume of a solid.

Super cool experience.


SensAble Technologies, an MIT spinout, developed the technology. The company was acquired by 3D Systems (https://www.3dsystems.com/haptics).

Note that Thomas Massie, US representative and climate change denialist, was the founder of SensAble.


Ah, cool. This looks like a related paper:

https://static.aminer.org/pdf/PDF/000/647/434/a_framework_fo...

The conclusion reminded me of an interesting point they mentioned during the demo: the haptic feedback ran at 1,000 FPS! We have much finer-grained expectations around time for touch than we do sight.


"climate change denialist" is such a disturbing and dismaying characteristic for someone who should know better


This sounds like it inspired a video game controller called the Novint Falcon. Swappable grip on the end of 3 arms that could detect up/down/left/right/forward/back movement and also had decently beefy motors to give you that sort of feedback. I was super impressed by the demo software that has a part where you moved a virtual hand through various substances. Honey and molasses really felt like you were moving your hand through viscous liquid, you could feel the texture of a bumpy rock but not move through it, and most impressive to me was a sphere of ice that really felt like your hand easily slid along the surface.

It got first party support in a bunch of Valve games and a decent number of games had unofficial support through mods but it never really took off and I think the company now makes medical tools with the same sort of tech.

The lack of feedback in input devices beyond a little rumble is really disappointing. I think feedback is extremely important especially now that VR is starting to actually take off.


Feedback is just hard to do. There are safety concerns (anything with powerful enough feedback is strong enough to hurt you if it malfunctions). It’s also pretty expensive, because you need a bunch of servos and force transducers which add up as you add DOF.


Sounds like the perfect sculpting experience if you combine it with modern VR.


Yeah, 3D modeling was the use case they were demoing it for. It was really impressive.


Was this pen on an arm later commercialized as the Phantom Omni?


Sounds like it. I worked for a few months building demos for the "SensAble Phantom" (not Omni).

It was limited in that it gave force-feedback only against the sphere at the tip. The orientation of the pen could not be restricted. We attempted to simulate feedback on more DOF through leverage, but it was lacking: like using a mouse to simulate a steering wheel in a driving game.


I have no idea. This was a quarter century ago. :D


I'm curious as to whether or not anyone has any comments on that wild Microsoft Surface "mouse," or "knob," or "dial," or whatever.

https://www.microsoft.com/en-us/d/surface-dial/925r551sktgn


I own one!

I'm a UI/UX designer who was historically very invested in the Surface ecosystem during the early Win8/10 era, so this was a day-1 curiosity purchase for me.

As a physical object, I like it a lot. It's not _perfect_, but it has a great weight, the Dial has a satisfying resistance (like a 70s stereo knob), uses AAAs, and the haptic feedback is solid for a digitally-triggered vibrating motor (vs a literal ratchet).

What I find, however, is that it's a superior consumption tool vs creation tool. It's at its best when being used as a single-function knob. Great for sitting at a desk while reading a whitepaper, or perhaps controlling volume while listening to music. Day to day I use a mouse with a stepped-wheel, but the smooth scroll on the Dial makes it my preferred way to handle longform content. Keeps a long article flowing.

However, it has not become an indispensable part of my workflow. That may just be me. I need to pivot between Windows/macOS/Linux over the course of a day, so a lot of its proprietary-tech promise is wasted and I've built fewer habits. I'm also not sure offhand how good the integration is with design tools after the initial fanfare around Adobe, Surface Studio, etc.

Other tricky part: It's bluetooth. Once it's awake and in-use, that's...acceptable, if you aren't actively looking for tiny bits of input lag. But if you need it on-demand (sudden noise from the speakers, etc.) and you haven't used it in a while, it may be a couple seconds before it's responsive to that first input.

Still, it _is_ intriguing enough to me that years on, it's still on my desk and is one of the only bluetooth devices I own, much less tolerate using. I think it's at its best when it's a context-sensitive linear actuator with some light multifunction capability vs a new paradigm.


Did you ever own a Griffin PowerMate[1] to compare it with? They were out in 2001 as a USB device I always wanted to play with but never had any reason to buy or use. In 2018 they were discontinued and replaced with a Bluetooth version, by the looks of it.

[1] https://i5.walmartimages.com/asr/8ad6a841-3639-429a-8163-9f3...


woah! Thanks for the link!

But I have to admit "press and hold, then rotate..." makes me think of child-proof bottles. So with a touch of arthritis, it might be a love-hate affair.


Back in the day at hp we had "knob boxes" for the then relatively new 3D CAD systems. Some MEs loved them, others did not.

Here's what they looked like: http://www.hpmuseum.net/display_item.php?hw=684


While we're talking about HP, it's also worth noting the HP Sprout PC they tried from 2014-2017, which facilitated projecting content onto (and then 2D/3D scanning from) a 'touchmat': https://en.wikipedia.org/wiki/Sprout_(computer) , https://youtu.be/GZMeY8leQBM?t=137


The best part of Sprout was the "turntable", which allows auto-positioning an object at various angles relative to the structured-light 3D camera. I was able to snag a couple and use them with our other 3D acquisition systems.


Knob boxes have had a comeback, for controlling photo values in Lightroom.

There are plugins for using generic MIDI controllers (for musical instruments) that have knobs, sliders and buttons. Then there are also dedicated controllers that do basically the same thing, but with specialised labels on the controls.
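
If you want to experiment without the dedicated plugins, the Web MIDI API is enough to see what a generic knob or slider actually sends. Here's a rough sketch (TypeScript, untested; the CC handling is the standard MIDI convention, but which controller numbers your knobs emit depends entirely on the device):

    // Minimal Web MIDI sketch: log every Control Change message from any connected device.
    // A knob or slider typically arrives as status 0xB0-0xBF (CC on channel 0-15),
    // followed by a controller number and a value 0-127.
    // Requires a secure context and user permission.
    async function listenToKnobs() {
      const access = await navigator.requestMIDIAccess();
      for (const input of access.inputs.values()) {
        input.onmidimessage = (msg: MIDIMessageEvent) => {
          if (!msg.data) return;
          const [status, controller, value] = msg.data;
          if ((status & 0xf0) === 0xb0) {
            console.log(`CC ${controller} = ${value}`); // map this to e.g. exposure, volume, ...
          }
        };
      }
    }
    listenToKnobs();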


The SpaceMouse is a great piece of interaction design/mechanical engineering; makes CAD much less painful. (Never tried it on a spreadsheet!) The key is the mechanical springs in it which provide just a bit of resistance without impeding precise control. It has a bit of a learning curve, kinda like when Macs reversed the finger scroll gesture on trackpads, but then it becomes part of my left hand while my right hand mans the mouse.


Honestly I did not find the learning curve that bad. It took me about 15 minutes to get as fast with the SpaceMouse as I am with a regular mouse. However the skill ceiling is high and there's a lot of potential to get really good with it.


The SpaceMouse looks really cool! I might pick one up and see if I can get it working outside CAD software (which is the only kind of demo I could find). Has anyone messed with it who can give me an idea of where to get started? Like, what's the interface it uses to talk to software? Can you use any old joystick library like SDL2, or is it something custom?


The company provides an SDK. There’s also an open source driver/SDK called SpaceNav.

The device has been used as a video game controller by someone playing Elite Dangerous. It was also used as a robot controller by NASA.

My one caution is that it’s quite hard to isolate control of individual axes. This is fine in CAD - if you accidentally move a tad in the Z axis while you’re trying to rotate, it doesn’t really affect anything. But depending on what you want to bind the mouse to, it might make it unusable.
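
If you'd rather poke at the raw device than use the vendor SDK or spacenavd, newer Chromium browsers can read it over WebHID. A rough sketch (TypeScript, untested, needs WebHID type definitions; the vendor IDs and the "report 1 = translation, report 2 = rotation, little-endian int16" layout are from memory of the older SpaceNavigator protocol, so treat them as assumptions that may not match every model):

    // WebHID sketch for reading raw 6-DOF axes from a 3Dconnexion device.
    // Must be called from a user gesture (e.g. a button click).
    async function connectSpaceMouse() {
      const [device] = await navigator.hid.requestDevice({
        filters: [{ vendorId: 0x046d }, { vendorId: 0x256f }], // older/newer 3Dconnexion IDs (assumed)
      });
      if (!device) return;
      await device.open();

      device.addEventListener("inputreport", (e: HIDInputReportEvent) => {
        const d = e.data; // DataView over the report body (report ID already stripped)
        if (e.reportId === 1) {
          // translation axes: x, y, z
          console.log("T", d.getInt16(0, true), d.getInt16(2, true), d.getInt16(4, true));
        } else if (e.reportId === 2) {
          // rotation axes: rx, ry, rz
          console.log("R", d.getInt16(0, true), d.getInt16(2, true), d.getInt16(4, true));
        }
      });
    }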


I was going to write you off as one of those weird trackball people, but then I dug into the mouse. It does look like a useful tool for CAD in conjunction with a normal mouse


Not only that. Nothing beats flying through Google Earth with a SpaceMouse!


Definitely. In Google Earth Pro there are a few settings to tweak in the Navigation options. Turn on Enable Controller, of course, and turn OFF Reverse Controls and Enable Visualization.

I also like to turn on "Do not automatically tilt while zooming". I don't recall if that affects SpaceMouse, but I don't like the default automatic tilt when using regular mouse or keyboard navigation.

Also if you are using a ThinkPad, be sure to try each of the three mouse buttons (hold one down and move the TrackPoint around) when not using the SpaceMouse.


I believe there was one of these, or a device very much like it, in Google's headquarters in Mountain View, as part of a wrap-around Google Earth display that let you zoom around 3D space.


What's wrong with trackballs?


Perhaps weird, definitely not a trackball person!


A tree of code scopes is a graph, and you have one per module, plus a navigation history, which makes it 3D, even 4D.

A device like that could be adapted for dev.


Can you use the SpaceMouse in 3D games? That would be cool!

I don't do cad work, but I'm tempted to get a SpaceMouse for the sheer coolness factor.


I believe someone has rigged one up to work with Elite Dangerous. There is no broad support, however. I would love to see a game designed specifically for the SpaceMouse


I wish more people experimented with hand input since this has essentially been solved in recent years due to advancements in computer vision: https://mediapipe.dev/. Yeah it might be awkward to ask the user to activate their camera, etc etc. But right now I'm barely seeing any experimentation in this direction which is a shame.

Part of me wonders if it's just that people don't know that this is solved on the web, so make sure to go here, try it out, and make something if Bret's article appealed to you: https://google.github.io/mediapipe/solutions/hands#javascrip... + https://codepen.io/mediapipe/pen/RwGWYJw
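
For anyone who wants to try it, the JavaScript solution linked above boils down to something like this (a sketch from memory of the @mediapipe/hands API; the element id and CDN path are placeholders, so double-check against the linked docs/CodePen):

    import { Hands, Results } from "@mediapipe/hands";

    // Assumes #webcam is a <video> element already playing a getUserMedia stream.
    const video = document.querySelector("#webcam") as HTMLVideoElement;

    const hands = new Hands({
      // The library loads its model/wasm assets at runtime; point it at a CDN copy.
      locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${file}`,
    });
    hands.setOptions({
      maxNumHands: 2,
      modelComplexity: 1,
      minDetectionConfidence: 0.5,
      minTrackingConfidence: 0.5,
    });

    // Each result contains up to maxNumHands sets of 21 normalized landmarks.
    hands.onResults((results: Results) => {
      for (const landmarks of results.multiHandLandmarks ?? []) {
        const indexTip = landmarks[8]; // landmark 8 = index fingertip
        console.log(indexTip.x, indexTip.y, indexTip.z);
      }
    });

    // Pump webcam frames into the tracker.
    async function loop() {
      await hands.send({ image: video });
      requestAnimationFrame(loop);
    }
    loop();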


Asking for camera access is not "awkward". In 2022, it's assumed that if I say yes, you'll record absolutely everything the camera can see, store it forever, sell it to anyone who wants it, and use it to show me ads for crap no one wants. There are a whole host of amazing possible apps using camera, location, microphone, etc., but tech companies have proven that they cannot or will not deliver these apps without egregious privacy violations.

Tech companies need to prove they can limit themselves to using data for the purpose for which it was requested, then we can talk about whether or not I'll give camera access.


I think one way this could be more acceptable is to point the camera at the hands above the keyboard instead of having to wave my hands in front of my face. It would be interesting to enable gestures for swiping between desktops without using the trackpad.


The LeapMotion was basically a really accurate Kinect for your hands, and that was back in 2014. I always wondered why it never took off, and I suspect it's a market issue more than a technological one. Waving your hand in the air and doing hand gestures just didn't provide enough benefit I guess


I had one of the LeapMotion input devices back around when they launched, and really did try to use it in earnest - using a whole bunch of shortcuts and AutoHotkey hacks to navigate the OS with it. There were even experimental programs people wrote to input text with it, using something similar to chorded keyboards.

It didn't really work out though. Long story short: my arms got tired. Turns out that it's a kinda fundamental problem with how they had designed the interface - hovering your hands above something for extended periods of time is simply just tiring and uncomfortable.


That seemed pretty obvious the first time I saw it, but I figured maybe I was over-estimating the issue. I mean, here's this darling company with a product everyone's excited about. Surely they must have thought about that!

I guess they did not think about that.


The Leap Motion still does hand tracking better than MediaPipe, and it's still the best hand tracker I know of (besides larger devices like the Oculus Quest).

We've got an open source library for mobile hand-object input [https://portalble.cs.brown.edu/], and the version with Leap Motion is really nice, but doesn't directly work with a phone (we had to pipe data through a compute stick to make it work).

I'd love to see MediaPipe Hands match LeapMotion precision some day, but I'm not even sure if it's possible. A real depth sensor goes a long way.


Oh definitely, it just goes to show that a lot of the sci fi gesture-based interfaces just aren't very practical


I really wanted to play with it but never did, because I had to buy a whole device (that I might only use once and that not many people were developing for). With it being available on the web (without a device) things might be different, since the bar to try it out is much lower. (After all, the Oculus has tons of fun games and things that use hand tracking, so I don't think you can conclude the idea doesn't have potential.)


"really accurate" that wasn't my experience of the leap motion at all, we had one alongside an occulus rift dev kit in a museum I worked for and it was essentially impossible to use, even for their inbuilt demo games. We got a replacement and had the same experience so just gave up on it.


> "I call this technology Pictures Under Glass. Pictures Under Glass sacrifice all the tactile richness of working with our hands, offering instead a hokey visual facade."

Anybody remember the 2008 Blackberry Storm, the company's iPhone competitor? It had a "clickable" glass panel, to bridge the UX gap between the tactile KB BlackBerry phones and the touchscreen-only Storm. Touching an icon required tapping the glass panel, which had some give, making it feel like a big button.

It was far from a perfect experience. The touch events all worked, but I always felt that I was about to break the screen with each successive touch.



Doesn't matter if it's from 2011. It's evergreen


Except for the Youtube embed shown right at the start of the article.


Mirror of the video if you're interested: https://www.youtube.com/watch?v=KytMZOLyF4Q


I was wondering why this sounded familiar and it's from 2011. Here are various things that have been invented along the same lines as Bret mentions.

* https://dynamicland.org/ - Bret Victor's vision, looks really cool
* Kinect was released (November 4, 2010) a little before this article and presented another vision of the future, but the market didn't think so
* Oculus now detects hands, and I'm pretty hopeful this will add more gestures; similar gait detection will be huge for interfaces

All in all, the incremental changes are starting to look more like what Bret is suggesting rather than purely "pane of glass"


Unfortunately I think Dynamicland is dead. The physical space in Oakland doesn’t exist anymore. It sounds like only Bret Victor and maybe one other person are left and Victor is relocating to a university Biology lab to try to implement his ideas there.

Source: Andy Matuschak mentions it in https://www.notion.so/blog/andy-matuschak

One thing which comes to mind is that Dynamicland was a strange laboratory. It was a space in Oakland that is no more, but it was a physical environment where the primary activity being undertaken was creating this very unusual computing system.

And in fact, that's exactly what the principal investigator is doing right now. He's picking up and relocating the work to a very interesting synthetic biology lab, where maybe the further development of the system will happen in a way that's meant to support this professor's research.


Two things about Dynamic Land that I love.

1) Bret brought the computer into the world, instead of bringing the world into the computer, e.g. Oculus or Vive.

2) The operating system that senses the world and reads instructions from objects is influenced by Smalltalk, and from what I understand it allows for Smalltalk-like programs to run on it in the form of object instructions and interactions.


Ad 2, why do you think Realtalk has any resemblance to Smalltalk?

https://colelawrence.com/posts/2018-12-06-distribution-model...


> Kinect was released (November 4, 2010) a little before this article and presented another vision of future, but the market didn't think so

The Kinect has pretty much dried up for video games, but the company that developed the first version of the Kinect for Microsoft was later purchased by Apple, and their technology underpins the FaceID tech that appears in every iOS device these days.

(Apple has also had rear-facing Lidar on their iPads & iPhones for a few years now, and I believe that it is also an evolution of the Kinect tech, but I don't know for sure.)

I am disappointed that it withered away for video games, since it was really interesting & fun technology.


Our computer interactions could be so much richer if we allowed computers to observe and listen to us and our surroundings continuously. They would learn faster, and have the full context of everything we say or ask for or point at or do.

But we don’t allow that, because we are (rightly) worried that doing so would give all our private and sensitive personal information to greedy companies and invasive governments.

The future of computer interaction depends not on better hardware or algorithms, it depends on trust. And discretion. Solve those problems and you will unlock huge potential.


They do such a bad job with the context they currently have, that I can't see adding more context doing anything other than confusing them.


Since this is from 2011, you should take a look at his future of interaction design:

https://dynamicland.org/


I mention it elsewhere, but unfortunately I don’t believe Dynamicland exists anymore as a physical space.

https://news.ycombinator.com/item?id=31531398


Why is it that, without fail, design sites talking about user experience have a tiny gray font?


You may use reader view, or zoom in. I opine that good design isn't just about being usable by the lowest common denominator (like huge fonts for grannies) but rather about giving the user the power to customize their own experience.


Well, this granny doesn’t have perfect vision, so it’s nice when the default isn’t tuned for the perfect human specimen, especially when the default doesn’t seem to buy us anything (what does grey text accomplish?).

Not that this website is that bad... I just find this attitude to be annoying. :) Maybe I’m still in partial denial about my deficiencies.

Thank goodness for my browser’s reader mode though when the website has way too long lines. No credit to the website designers, of course.


Could you tell me what you think of the guides on allaboutberlin.com? I care a great deal about readability, but I’m not sure if I’m doing it well. I made the font a bit bigger on desktop this week. Now I wonder if Open Sans was a good choice.


I think it looks good. Decent font and size.

I’m not sure about some of the footnote (raised) style hyperlinks. They could be difficult to use on mobile.


This means the "good design" comes from the additional functionality provided by the browser to overcome the poor choices of the site.


This was popular back then (2011), when screens had lower resolutions and more emphasis was on visual design as opposed to UX.


Related:

A brief rant on the future of interaction design (2011) - https://news.ycombinator.com/item?id=21116948 - Sept 2019 (153 comments)

A Brief Rant on the Future of Interaction Design (2011) - https://news.ycombinator.com/item?id=6325996 - Sept 2013 (35 comments)

A Brief Rant on the Future of Interaction Design - https://news.ycombinator.com/item?id=3212949 - Nov 2011 (150 comments)


Perhaps it’s the 2011 date that makes it odd, but I think the author is picking way too much on dumb marketing commercials, when there are countless companies in the world shipping actual products fitting these ideas.

For instance Nintendo has been exploring that space for so long, coming up at mass market level with different paradigms to interact with their devices. Microsoft has a long history of UX/UI research and actual shipped adaptive products, including for people with special needs. We’re going way past 2011 but there is also a ton of research on better haptic feedback and ways to make the “R” part of VR more real.

On the other side, I’m sure I’m not alone in disabling most of the input haptics and sounds and animations when setting up a new phone or laptop. Glass is fine for visual things, and I’m also fine with only visual feedback when using hand tracking on the Quest 2, for instance; the stuff I am manipulating is menus and lists, and I’m fine with no tactile feedback for fundamentally virtual concepts.


I think in many ways, you're reinforcing his point. Your perspective here is one that doesn't even consider ways of interacting beyond touching a flat screen.

When you talk about haptics, you only mention passive haptic feedback: annoying bumps, buzzes, and rumbles. You're right that those are annoying and mostly useless (though the haptic feedback on MacBook Pro trackpads is quite nice).

But that's not what he's talking about at all. His whole point is that interacting with tools isn't "fundamentally virtual". That's a choice that technology has made because screens of pixels are so amenable to software control.

If you want to, say, decide which restaurant to eat at, there's nothing intrinsically visual or 2D about that. We assume that a screen is the only natural way to do that simply because we're used to that paradigm, which is exactly the problem he's ranting about.

Imagine a "restaurant tray". It has, I don't know, physical sliders and buttons at the top where you can specify what kinds of restaurants you're OK with. When you do, a bunch of tokens appear on the tray for every potential restaurant. You and your party can reach out and grab them. Group a few together as potential ones. Sweep the ones no one wants off to the side. Maybe let each person take and hold their favorite, or pull them over to their side of the tray. Swap and trade them like poker chips.

Think about how much more immediate and collaborative that process would be for reaching agreement on where to go.

That's the kind of stuff he's talking about. You might be thinking, "Well it would be too hard to build a system that creates physical tokens for every possible restaurant." But, again, that's a technological problem we'll only solve if we have a vision for those kinds of experiences in the first place.

If we can't see beyond pixels on screens, we'll never get there.


Thanks, I think I better understand the point he is making.

I would call that the “why don’t we have flying cars” kind of thinking, and the restaurant example feels the same to me. As to the “that's a technological problem we'll only solve if we have a vision for those kinds of experiences in the first place”, that’s a valid point. I think where I stray from that is that I don’t feel the appeal of the vision.

A restaurant giving me a complex “choosing experience” instead of presenting the traditional “flat” list and menus is not an improvement from my perspective.

In general, I think we’re at a stage where interfaces that make more sense physically are still physical. For instance, cars are completely electronically driven, but we (still) have physically optimized controls. My oven is a computer, but I have knobs and buttons that click. My “smart” lights have rotating controls to adjust brightness, etc.

These interfaces are well established in the world, and I feel they’re not in the article because the author (and you, perhaps?) takes them for granted.


> In general, I think we’re at a stage where interfaces that make more sense physically are still physical. For instance, cars are completely electronically driven, but we (still) have physically optimized controls. My oven is a computer, but I have knobs and buttons that click. My “smart” lights have rotating controls to adjust brightness, etc.

Unfortunately, we're already sliding in the wrong direction even on those examples.

Many newer cars have huge touchscreens and relegate many controls to it that used to have dedicated tactile inputs. A lot of newer electric ranges put all the controls on a screen under the same glass surface as the cooktop. Many smart LEDs only have an app to control them.


Sounds like you're describing an electronic chess board[1] being hacked to be a general purpose input. I don't think I've seen something like that before. Start by unlocking your computer by moving the pieces to the right positions, then give the Queen a McDonald's hat, the King a BurgerKing hat, one Bishop a Taco Bell ringer, and proceed in this fashion until one of you checkmates your way to a dinner decision...

[1] like https://www.adverts.ie/other-electronics/dgt-electronic-ches...


It was just the first random idea that came to mind.

But the general point is that when thinking about "interface", we often don't even consider that the interface could be made of physical things and instead implicitly assume pixels on a screen. The idea that you could interact with a system without using video is practically unthinkable. Victor's point is that we can't invent what we can't imagine, so it's important to always have visions that seem unattainable and not give in to accepting things as they are.


Can you give other examples of these countless companies? You've listed two, Nintendo with their Switch controllers and Labo, and Microsoft with things like the Surface Dial, and they are some of the only companies exploring stuff like that. You make it sound like it's ubiquitous, when 99% of people interact with the digital world through one or two fingers (or a few more with a keyboard, or a hand with a mouse).


Google went with squeezing on past phones, Sony put a tremendous effort into the trigger mechanism for the PS5, and Panic is playing with the crank handle on the Playdate, to stay with very public products. And yes, I see a lot more on upcoming products designed for VR.

I agree 99% of people interact with very shallow feedback, and that's, I think, a reasonable state: 99% of the applications we use are effectively shallow, require very simple input, and only have basic output. (It makes me think of the number of people using no more than 2 or 3 keyboard shortcuts for their daily tasks on a desktop computer; they're not stressing about getting more from these interactions.)

PS: I feel silly for not mentioning Apple’s or Samsung’s styluses with pressure and angle sensitivity…


Steam's VR controllers and the Steam Deck play with a distinction between "touch" and "press" with buttons/joysticks, for more intuitive multimodal controls. For example, various Steam Deck control schemes have gyro aiming, but only while you have thumb touching the stick. It's one of those things that's very intuitive in use but hard to explain the "why" to somebody who hasn't tried it.


>Glass is fine for visual things, and I’m also fine with only visual feedback when using hand tracking on the Quest2 for instance.

Hard disagree here. As impressive as the hand tracking in the Quest 2 is (especially the new 2.0 iteration - just...wow), I'll prophesy that until at least some kind of haptic feedback system is available (Powerglove 2.0! :) ) it will hit a brick wall. Hard. It's such a natural mode of interaction that the total lack of haptic feedback sticks out like a sore thumb and breaks the immersion hard - so much so that I find myself going back to the Oculus controllers after a few minutes because they feel(!) more natural. They might only be a crude first-order approximation, but at least there I have a "power grip", for example.


I find this pretty interesting in that we still haven’t found a practical “raison d’etre” for VR, which makes the great parts and the deal-breaking parts completely different for each of us.

I see the future of VR as two-fold, with the use as a specific tool on one side, and games/entertainment on the other side.

On the tool side, I see hand tracking as 90% there. I would use it mostly for navigation and input, and totally see myself in Excel-like applications with finger gestures and Minority Report-like movements, but actually well thought out and useful.

I am with you on the game/entertainment side, where more feedback and immersion would tremendously help enjoy being in the moment and feeling things that aren’t there.


Imo, the idea that it would be great to type on a glass touch screen is Steve Jobs' biggest and most long-lasting misstep.

Overall I think it's interesting and bizarre that both modern technology and visions of the future have totally sacrificed tactility. It seemed to be all about removing the real world: tactile interfaces are old fashioned, in the future everything works in a way that has minimal connection to reality, e.g. a Minority Report style UI, and so obviously is ethereal and cannot be touched. It makes me wonder why we had that ideal in the first place, and whether that ideal shaped technology or vice versa. Why did we fantasise about losing tactility?

Something I've also noticed is that we almost seem to be unable to imagine a programmable tactile interface, even in science fiction. I guess humans wanted "something extra/futuristic/other-worldly", and that meant having things be unlike anything else in the world, which as the author points out, means something without tactility.


IMO tactility was sacrificed in favor of versatility because we don't have a material that is both tactile and versatile, and simulating such a thing is hard. And IMO it was a good tradeoff to make: instead of a tactile keyboard you have a keyboard that is infinitely adaptable. Maybe that changes with AR advances. Could it be gloves? Embedded chips that send electrical impulses to your hands? An actual material versatile enough to be tactile and adapt to all the needs of a modern computing environment? But all of those seem pretty far-fetched to get into consumers' hands in the near future, and certainly back then. There are slight advances here with things like haptic engines in phones, but it is still a far cry from the feel of a real tactile substance.


IIRC, there was a US Navy ship whose stations had physical buttons, knobs, and switches that were software controlled, as in they mapped to different functions depending on whether the operator had selected the weapons, navigation, radar, etc. functions.

You still had the tactile interface, but you gained the flexibility of the dynamic interfaces.

Not sure how well it performed though.


I think that's a pretty common design pattern for industrial/reliable applications. Multi Function Displays have been common in "glass-cockpit" aircraft for a long time. The F-16 notably also features master mode switches in the group of buttons below the HUD. They allow the operator to quickly switch between air-to-air, air-to-ground, dogfight modes, etc., bringing up relevant displays and controls at a moment's notice. I've heard that pilots consider the F-16 one of the best designed in terms of interface.

Soft buttons are also pretty common for test and measurement equipment. I own many modern digital oscilloscopes and spectrum analyzers, all of which feature physical buttons around the screen that are selected by software to do different things


That's pretty interesting.

Apparently touchscreen throttle and helm controls didn't work out, though, and were removed. I haven't read anything about this since 2019, but here's the link: https://www.theverge.com/2019/8/11/20800111/us-navy-uss-john...


The war on tactile interfaces is doing incredible amounts of harm. Personally, my life got perceptibly better after I 3D-printed mouse buttons and glued them on top of my laptop's clickpad.


Could you share your model for this? How well does it work? I've considered doing the same.

The set of laptops I'm willing to run non-macOS on is limited by the set of laptops that have physical touchpad buttons, and that's a diminishingly small market segment :(


They work quite well. The biggest difference is that they're attached to the same plane so the sides of the gap between them don't move up and down independently of each other, which makes it slightly harder to tell which side is which by touch. The pad can still sense your finger through them so it knows where you're clicking. I can post the model after I return from vacation next week, but you'll have to shape and size it for the target touchpad anyway. It's essentially just a flat rectangle with a radiused corner that's half the length of the touchpad, with a semi-circle in the middle to index my finger on. The thickness is 0.6mm for the tallest part. This will vary depending on the geometry of the bottom edge of your pad. I used silicone glue so it's easily removable with no surface damage.

https://i.imgur.com/yQj8TYJ.jpg


I also have a preference for more tactile inputs and was looking around earlier at some different controllers, e.g. MIDI sliders, knobs, buttons that you can connect to pretty much any application. Or alternatives like the Surface dial or Streamdeck.

Interested to hear if anyone has a setup like that that feels nice and is actually useful?


For perhaps the ultimate example of "pictures under glass", contrast this product:

https://www.slatemt.com/

with a traditional mixing console, full of physical objects to press, grab, twirl, slide etc.


It is interesting to contrast this with the Ableton Push or the Launchpad, which basically do the opposite: give you a physical controller tied to a digital interface.


Surprised there is no mention of the terms Gorilla Arm and Tactile Feedback (haptics).

These are the two reasons the "Minority Report" style of interactive design will never catch on for humans.

https://en.wikipedia.org/wiki/Haptic_technology

https://en.wikipedia.org/wiki/Touchscreen#%22Gorilla_arm%22


Is there any research into causing flat surfaces to simulate a bump while still remaining flat? I think I've read about interfaces that can raise or lower some parts for buttons, but it would be a lot more convenient to have a glass display where you can "feel" the outline of a button electro-chemically without it actually being there.


They do fingerprint reading through the screen on mobile phones by using ultrasound; I wonder if you can up the intensity to provide a haptic affordance to on-screen 'features' like buttons and such.


I've seen demo mobile apps that used the vibration motor to give haptic feedback, simulating the feel of buttons and such as you moved your finger across the screen. At the time I thought for sure the next iPhone would feature "Haptic Touch" APIs and it would just be a given in a few years. Still waiting...
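
On the web at least, you can fake a crude version of this today with the Vibration API (a sketch, untested; note that Android browsers support navigator.vibrate but iOS Safari does not, which is part of why those demos never became standard):

    // Buzz briefly whenever the finger slides from one on-screen "key" to another,
    // roughly simulating the edge of a button under the fingertip.
    let lastKey: Element | null = null;

    document.addEventListener("touchmove", (e) => {
      const t = e.touches[0];
      const el = document.elementFromPoint(t.clientX, t.clientY);
      const key = el?.closest(".key") ?? null; // ".key" is a hypothetical class for the buttons
      if (key && key !== lastKey && "vibrate" in navigator) {
        navigator.vibrate(8); // a very short pulse, roughly "crossing a button edge"
      }
      lastKey = key;
    });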


UIFeedbackGenerator (https://developer.apple.com/documentation/uikit/uifeedbackge...) has been around for a while, but it’s not exactly capable of "positional" feedback. Still, vibrating in response to actions such as rearranging list entries or drag interactions that require a certain distance threshold to be passed can feel quite satisfying.


Well there is haptic feedback on the newer phones, usually to tell you you've crossed some drag- or timer-based threshold, but yeah it's pretty limited in scope.


bring back old school mechanical knobs


Says EVERYONE who drives a late-model car that sticks everything behind the sliding-fingers-under-glass UX, from climate controls to the volume of the stereo.

This is a known problem, and one that will get people killed, because it demands that drivers focus on the infotainment screen and not the road to do a mundane task that their tactile fingers could easily perform on their own with mechanical controls.


It's really unbelievable how effective knobs, switches, buttons, dials, and dedicated screens or displays are.


Using synthesizer hardware is a joy of an experience.


It's interesting that synthesizers went through this already.

In the '70s and early '80s, with analog synthesizers, everything was knobs, sliders, and real buttons.

Then everything was stuffed behind a minimum of buttons and maybe 1 or 2 knobs/sliders if you were lucky, with a plethora of options and menus buried behind what was often a thin 16x2 LCD display.

That was the early to mid '90s. Then analog sounds in electronic music started making a comeback, and by the latter half of the '90s and beyond everything started to have knobs and sliders again, even if the internals were no longer analog.

I do wonder, if not for that trend driven by the TB-303 and whatnot, whether that would even have happened to synthesizers.

Maybe the same backlash and pendulum swing will happen with everything else, but it seems like there's more pressure than ever to make users fit the product and save costs that way, rather than make the product for users and accept the price increases.


The most praised multi-purpose keyboard maker out there is Nord, and the whole point of their stuff is that the interface is physical knobs and buttons.


Young interaction designers have been ranting about this for a while, but I for one do not want what they think I want. I like flat screens and want flat screens, even in VR


Not totally on topic, but I just bought a new Mac running Monterey. I've been running Win 10 and 11, and Ubuntu since 2014 or so, otherwise.

And the current OS X interface is nauseating. How we had decent GUIs when computers were 1/1000th of what today's are, yet this flat, ugly, undefined, pasty BS is acceptable, is beyond me.

No joke, I'm returning the M1 machine and will go back to dual-boot Win11 and Ubuntu. Sorry, Apple, but you've lost your way.


Great stuff, especially posted to a programming community. I wonder what it would be like if coding a system were more spatial & tactile. I think it may just make it easier to get back into a project after setting it down for a month or more, who knows.


Hands feel and manipulate things because they evolved in a physical world where that type of interaction was most efficient. The digital world opens up new possibilities that don't need physical manipulation to be controlled. It feels a bit luddite to restrict interaction just to physical manipulation. I agree, sometimes designers go too far, e.g. in the case of touch-screen hvac controls in cars. However, there are also examples where physical manipulation is not desired. For instance, the smartphone is so powerful because it's not tied to one physical mode of operation. It can have buttons for a calculator, or a keyboard, or serve as a book, or a video game controller, or...

Given that this was written in 2011, I commend the author for having an opinion, but it has aged rather like milk to me.


> For instance, the smartphone is so powerful because it's not tied to one physical mode of operation. It can have buttons for a calculator, or a keyboard, or serve as a book, or a video game controller, or...

Smartphones are powerful because the interface is reconfigurable in software. That's orthogonal to whether the interface is tactile or not.

Imagine you go to choose between two smartphones:

1. One is like you have today: a flat surface of glass with colored pixels underneath.

2. The other supports a surface that physically changes in response to the application. Open the calculator, and a grid of number buttons appears. Tapping one gives the satisfying click of an old calculator. Switch to a synthesizer and a row of piano keys appears. They have the bounce of a weighted piano and play louder or softer based on how hard you press them. Open a game and a D-pad and joystick materialize.

I know which one I'd pick.


The author, Bret Victor, put his money where his mouth was and founded a tangible computing organization named Dynamic Land[0] that is funded in part by the research organization led by Alan Kay.

Dynamic Land might be the most interesting approach to collaborative computing in person that I’ve seen to date.

Bret designed the system, wrote the operating system, and the libraries used to interact with the OS via physical objects. It’s awesome.

[0] https://dynamicland.org/


I find Tilt Five[0] to be 100x more interesting than Dynamic Land. They don't serve entirely the same purpose, but I think it illustrates the difference between clinging to the past (Dynamic Land) and embracing a digital future (Tilt Five). Both solutions are powered by projectors and sensors, but Tilt Five pushes the envelope a lot further than Dynamic Land.

[0] https://www.tiltfive.com/


Conversely, I find Tilt Five to be nowhere near as interesting to me as Dynamic Land. Tilt Five doesn't push the envelope on computing further as far as I can see, but rather is just an expensive game peripheral for Windows and Android phones; Dynamic Land is attempting to develop and enable new forms of media, especially communal media. The two projects shouldn't be held in the same breath IMO.

Besides, the Tilt Five developer program YouTube commercial[1] is the worst. It's like an xbox 360 reveal video at CES or a Qualcomm presentation about digital natives.

[1]: https://www.youtube.com/watch?v=a-7lHqyBQeg


I’ll have a look at this, thank you.

In what ways do you feel like TiltFive supersedes what Dynamic Land is doing?

[EDIT]: After reviewing the marketing materials, it looks to me that this project is orthogonal to the goals of Dynamic Land, and they don’t really serve the same purpose.

1) Dynamic Land is intended to allow for computing by interacting with real physical objects, and seeing the outputs displayed back in the real world without augmented reality hardware.

2) TiltFive seems intended to allow for holographic display of traditional or specialized game content onto physical objects. More like advanced AR than tangible or physical computing.


I guess my point is that what Dynamic Land is doing is not compelling, so there will never be a perfect analog to compare it to, unless you compare it to another initiative that is also not compelling. Maybe Dynamic Land makes a cool demo, but the approach isn't capable of producing anything generally interesting. As far as I can tell, there have not been any updates on the initiative since 2019, so it looks like Dynamic Land might be dead. What I find interesting about Tilt Five is that they bring computation into the physical space. It might not be tactile, but it's at least usable for something. Another comparison to Dynamic Land is that Dynamic Land tries to create computation in one shared environment. Tilt Five, on the other hand, allows two people in separate physical environments to share a digital environment. These are just two examples of how a digital-first approach is more compelling than Dynamic Land's analog-first approach.


> I guess my point is that what Dynamic Land is doing is not compelling

Have you used it? My gut feeling is that Dynamicland is likely something that has to be experienced to be understood. I saw Bret Victor present on Dynamicland at a design conference, and there are tons of little happy and unexpected accidents that come from people using it and experiencing it. It's stuff that you can't throw into bullet points.


You're creating an artificial dichotomy here with empty buzzwords.

What exactly makes one the opposite of the other?


The question should be, can we have a device with a programmable real-time tactile environment?


I think kinematic UI's would at first seem new and fun, but then become tedious. Sign language has less bandwidth than speaking or typing and expends more calories.


> Sign language has less bandwidth than speaking...

Do you have a citation for this claim? The research I can find seems to show that signers and oral speakers convey concepts at roughly the same rate with ASL perhaps slightly more efficient than orally spoken english.

(Signers use fewer signs per second than oral speakers do words but compress more information into each sign, or perhaps simply omit extraneous information that has no effect on the meaning.)


Thank you for the correction. In hindsight I sound foolish: e.g. sign-language interpreters keep up with the speech they are translating, eh?


Does anyone know if it's possible to visit DynamicLand in Oakland? Massive Bret Victor fan.


The future of digital interaction will be three dimensional and skeuomorphic.


Did I read this article wrong? Are hands a metaphor for something? A lot of people can't and/or eventually won't be able to use their hands. This was as true in 2011 as it is now.


Are the "pictures under glass" interfaces critiqued in the article really any better for those who cannot use their hands? I have full use of my hands, so I am not sure, but the few people I know with limited hand control use some specialized input device that they can operate with their mouth, eyes, or head.


404


Speaking of interaction design, nice mechanical switches, how fantastic hands are at manipulating things, and The Addams Family Thing:

https://www.youtube.com/watch?v=0tQq7OTygyg&t=59m35s

>Constructionism2016 Session 16: Plenary 4, Cynthia Solomon

>One of the funny things Marvin Minsky did in his younger days is that he spent time with another very famous computerist, Claude Shannon.

>And Claude Shannon and Marvin came up with The Most Useless Box In The World.

>It, uh, I have a video of somebody... What it is, is um, actually, Claude -- Marvin designed it, and Claude built it.

>And it's a box. You turn it on, and a hand comes out and shuts it off. It goes back in.

>People don't know, but that's Marvin and Claude Shannon. Claude Shannon was the father of information theory. That's what he did.

https://en.wikipedia.org/wiki/Useless_machine

>The best-known "useless machines" are those inspired by Marvin Minsky's design, in which the device's sole function is to switch itself off by operating its own "off" switch. Their popularity has recently been raised by commercial success. More elaborate devices and some novelty toys, which have a more obvious function or entertainment value, have been based on these simple "useless machines". [...]

>The version of the useless machine that became famous in information theory (basically a box with a simple switch which, when turned "on", causes a hand or lever to appear from inside the box that switches the machine "off" before disappearing inside the box again) appears to have been invented by MIT professor and artificial intelligence pioneer Marvin Minsky, while he was a graduate student at Bell Labs in 1952. Minsky dubbed his invention the "ultimate machine", but that sense of the term did not catch on. The device has also been called the "Leave Me Alone Box".

>Minsky's mentor at Bell Labs, information theory pioneer Claude Shannon (who later also became an MIT professor), made his own versions of the machine. He kept one on his desk, where science fiction author Arthur C. Clarke saw it. Clarke later wrote, "There is something unspeakably sinister about a machine that does nothing—absolutely nothing—except switch itself off", and he was fascinated by the concept.

>Minsky also invented a "gravity machine" that would ring a bell if the gravitational constant were to change, a theoretical possibility that is not expected to occur in the foreseeable future.

https://www.youtube.com/watch?v=Gw2Bq0HYu1M

>The Ultimate Machine

>Davide Moises (1973-), The Ultimate Machine after Claude E. Shannon, from the David Moises collection, multimedia installation, 2009, Technisches Museum Wien

https://www.youtube.com/watch?v=6iattYDKp3A

>SMALL moody useless box " leave me alone " box. This useless box has an attitude. It has a variety of movements and behaves very cute when you shut it off. [...]


Legend


“So here’s a vision of the future that’s popular right now. [video embed, black with the text THIS VIDEO IS PRIVATE]”

I like this vision a whole lot.


I don't think the author has updated the site since 2011. Here's a link someone posted above: https://www.youtube.com/watch?v=KytMZOLyF4Q

I don't think it's his vision; more like "anti-vision". ;)


I suspect this emphasis on tactility is sentimental nonsense. But we should definitely give it a serious try to find out!

What I would be much more interested in is eye tracking. I have three screens covered in interactivity, but the speed at which I can interact with them - which is the speed at which I can explore and experiment - is limited by the speed with which I can grasp my mouse, then shove it around to position the cursor. I'm convinced I could do many things so much more fluidly if the machine could see where I was looking and transfer focus there in a flash.


Drive a modern car where everything that used to be a button or switch is now hidden behind glass. Wanting the return of mechanical controls with tactile feedback is not sentimentality, it's usability.


This glass-based crap is the most blatant evidence of pervasive user interface CHEAPNIS. Yes, bring back buttons... and big chunky toggle switches... and especially rocker switches <3 Ditch the useless glossy glass crap and the el-cheapo membrane buttons that rot out RSN.


Rant: my car (Zafira) has an indicator stem that returns to centre when you activate it. Normally they stay offset so it's very clear the indicator is on. Pair that with a too quiet 'tick' sound and the indicator on the driver's panel being hidden behind the steering wheel ... it makes the vehicle so frustrating to drive. Argh, HCI is so important.

Also, while I'm here: the steering wheel is adjustable in height as in many cars, but I've never seen a vehicle with indicator lights on the driver's panel at both the top and bottom (they're all at the top), so there's always a chance you can't see them when you adjust the wheel height. I've driven a lot of cars as we often hire. Seems like a fundamental flaw to me.


> I suspect this emphasis on tactility is sentimental nonsense.

I don't think it's sentimental to think it's undesirable to have one of your senses muted.


I don't anticipate replacing my keyboard and mouse with a touchscreen for doing software development work.

If it was measurable, I would guess that productivity plummeted for most companies that replaced the traditional computer/mouse/keyboard with a tablet. I remember going to an AT&T store years ago and watching the customer care rep struggle to get my information into their system with an iPad. A five minute data-entry task on a computer took this person almost 20 minutes on their tablet.


It’s not sentimental nonsense: it’s enactive cognition.

I believe this is a hugely underappreciated capability of humans, possibly one of the keys to a type of genius.


How do you feel about typing on an iPad or iPhone or membrane keyboard vs a keyboard with actual keys and switches?



