
3D interfaces are usually worse than 2D interfaces - ingve
https://www.facebook.com/permalink.php?story_fbid=2407256322842204&id=100006735798590
======
DubiousPusher
I've been working in AR/VR for a few years now and fairly convinced of this.
In fact, beyond the OS interface I'd say this is true of presenting most
data. The usefulness of most interfaces is that they abstract something real.
Part of the value of abstraction is the removal of most of the extraneous
aspects of a problem. Removing a whole dimension removes a lot of data that
may be extraneous.

Maps are one very good example of this. A map is partially so powerful because
it removes a ton of information you probably don't care about. Occasionally,
would you like to know a house is up a steep hill before you drive there?
Sure. But most of the time, who cares? And if you really need elevation data,
there's a special map for that: a map which actually shows you information
about grade in a much clearer way than even a 3D model would (seeing as humans
are actually really bad at guessing a grade by looking at a real slope).

~~~
jerf
I think it's more than that. As Carmack observes, you don't _see_ in 3D. You
see two 2D planes that your brain extracts a certain amount of depth
information from. As a UI goes, you can't actually freely use that third
dimension, because as soon as one element obscures another, either the front
element is too opaque to see through, in which case the second might as well
not be there, or the opacity is not 100% in which case it just gets confusing
fast.
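
(A minimal illustration of that extraction, as a sketch: standard
pinhole-stereo triangulation recovers depth from the disparity between the
two 2D images. All numbers here are made up for illustration.)

    # Depth is not sensed directly; it's computed from how far a feature
    # shifts between the left and right 2D images: z = f * B / d.
    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        """Recover depth in meters from focal length (pixels),
        eye/camera baseline (meters), and disparity (pixels)."""
        return focal_px * baseline_m / disparity_px

    # Hypothetical, human-ish numbers: ~65 mm between the eyes.
    print(depth_from_disparity(focal_px=1000, baseline_m=0.065,
                               disparity_px=13))  # -> 5.0 (meters)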

With humans, you've really got something more like 2.1d to work with;
interfaces need to take that into account, and maybe sometimes use that .1d
depth dimension to provide light hinting, but not depend on it.

So you're not _removing_ a dimension... you're _acknowledging it doesn't
exist._ There isn't really a "third dimension" here to take advantage of. It
may be visually impressive if something flies from far away to up close, but
in interface terms, all that has happened is that the z-buffer values shifted,
but the interface still has the same number of square radians available that
it did before.

The depth information is a lot more like an additional color than an
additional spatial dimension. UIs are _better_ with color hinting available
when used well, but there's not much _fundamental_ difference between a color
2D interface and a monochrome 2D interface. Proof that it isn't much: some
people use settings to set their phones to monochrome, and the result is at
worst occasionally inconvenient, not crippling. Compare that with trying to go
from a 2D interface to a 1D interface, which would be utterly crippling and
destroy a huge swathe of current UI paradigms.

To truly "see in 3D" would require a fourth-dimension perspective. A 4D person
could use a 3D display arbitrarily, because they can freely see the entire 3D
space, including seeing things inside opaque spheres, etc, just like we can
look at a 2D display and see the inside of circles and boxes freely. They have
access to all sorts of paradigms we don't, just like no 2D designer would ever
come up with a "text box", even though it fits on the 2D plane, because no 2D
user could see inside the text box to see what's in it.

(This is one of those cases where the mere act of typing it out improved my
understanding of the topic. I didn't have the "depth information is really
just another color" idea going in, but the more I think about it the more
sense it makes, in terms of the amount of information that channel can carry.
Just like color, it's not quite 0, but you can't really stuff _all_ that much
in there.)

~~~
DubiousPusher
True, almost any interface that overlaps helpful data in layers or stacks is
usually terrible. Hell, I don't even like desktop computer GUIs that allow you
to have stacks of windows. I'd rather see one thing at a time and cycle
between them. Or have a picker view.

That said we do actually get quite a bit out of that ability to see depth.
People who lose depth perception have quite a hard time adapting to the world.
And our spatial understanding seems to go beyond our vision. That to me is
where a 3D interface might be really powerful. Sometimes things which we
struggle to decode in 2D are just intuitive in 3D like knots or the run of
wires or pipes.

As I said elsewhere in this thread, I think the 3D interfaces that are really
going to be powerful haven't occurred to us yet. And I believe that what we'll
find in time is that there are things for which 3D interfaces are tremendously
advantageous, and using anything else will feel like a hindrance. But those
will be things for which 2D interfaces don't already do an amazing job.

~~~
jerf
Carmack already mentioned the existence of "true 3D" content, for which you
get a 3D interface whether you like it or not, so to speak, so I didn't go
into that.

But making everything 3D, because VR, is as silly as when the gaming industry
made everything 3D, resulting in entire console libraries full of games that
looked like shit even at the time, pushing 4 or 5 frames per second and making
other incredible compromises, even though the same consoles were monsters of
2D performance. As nice as it may be to have truly 3D content available in
pseudo-real space, there's no reason to insist that when you want to set the
shininess of a given pipe you need a huge skeuomorphic switch, as big as an
old car stick shift, that you can visibly pull popping out of your UI or
something, when all you need is a slider. (If anything, I'd think minimalism
in a VR environment is a good idea, both to contrast with the other content
and to avoid detracting from it.)

I think that's probably the kind of crap Carmack is complaining about. We've
been around the same loop a couple of times already, and 3D, albeit on 2D
surfaces, was one of them, so it's fair to look to past instances of such BS
and maybe this time try to move along the curve a bit faster. I'd say
that if we can get this nonsense out of the way faster rather than slower,
we're more likely to get to the truly useful 3D stuff that doesn't exist yet.
Otherwise we risk 3D interfaces becoming something like the Wiimote, which
IMHO was actually a really useful tool that has become despised solely because
it was badly misused by so many games, because motion controls. (Another
example we've already been through.)

~~~
joe_the_user
Your points are excellent and classic.

One other hypothetical use of a 3D interface is as a way to conceptualize
"true N-dimensional" data. A 3D experience indeed doesn't help any rational
conceptualization of an N-dimensional situation, but it might, maybe, allow
you to mobilize the unconscious reflexes humans have for dealing with regular
3D space. But all this might also be a 90s cyberpunk fantasy.

------
DonHopkins
I recently posted some comments by Dave Ackley about 2D versus 3D, which I'll
repeat here:

David Ackley, who developed the two-dimensional CA-like "Moveable Feast
Machine" architecture for "Robust First Computing", touched on moving from 2D
to 3D in his retirement talk:

[https://youtu.be/YtzKgTxtVH8?t=3780](https://youtu.be/YtzKgTxtVH8?t=3780)

"Well 3D is the number one question. And my answer is, depending on what mood
I'm in, we need to crawl before we fly."

"Or I say, I need to actually preserve one dimension to build the thing and
fix it. Imagine if you had a three-dimensional computer, how you can actually
fix something in the middle of it? It's going to be a bit of a challenge."

"So fundamentally, I'm just keeping the third dimension in my back pocket, to
do other engineering. I think it would be relatively easy to imaging taking a
2D model like this, and having a finite number of layers of it, sort of a 2.1D
model, where there would be a little local communication up and down, and then
it was indefinitely scalable in two dimensions."

"And I think that might in fact be quite powerful. Beyond that you think about
things like what about wrap-around torus connectivity rooowaaah, non-euclidian
dwooraaah, aaah uuh, they say you can do that if you want, but you have to
respect indefinite scalability. Our world is 3D, and you can make little
tricks to make toruses embedded in a thing, but it has other consequences."

Here's more stuff about the Moveable Feast Machine:

[https://news.ycombinator.com/item?id=15560845](https://news.ycombinator.com/item?id=15560845)

[https://news.ycombinator.com/item?id=14236973](https://news.ycombinator.com/item?id=14236973)

The most amazing mind blowing demo is Robust-first Computing: Distributed City
Generation:

[https://www.youtube.com/watch?v=XkSXERxucPc](https://www.youtube.com/watch?v=XkSXERxucPc)

And a paper about how that works:

[https://www.cs.unm.edu/~ackley/papers/paper_tsmall1_11_24.pd...](https://www.cs.unm.edu/~ackley/papers/paper_tsmall1_11_24.pdf)

Plus there's a lot more here:

[https://movablefeastmachine.org/](https://movablefeastmachine.org/)

Now he's working on a hardware implementation of indefinitely scalable robust
first computing:

[https://www.youtube.com/channel/UC1M91QuLZfCzHjBMEKvIc-A](https://www.youtube.com/channel/UC1M91QuLZfCzHjBMEKvIc-A)

------
pazimzadeh
I think there is a severe lack of imagination going on here. A pencil is a 3D
interface.

This is as good a time as any to post this eight-year-old gem from Bret
Victor:
[http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi...](http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesign/)

~~~
comex
> A pencil is a 3D interface.

Is it? I’d say the VR analogue of a pencil is a pointing device, which makes
the user interface the VR analogue of _paper_. And paper is fundamentally a 2D
interface. There are some 3D actions like turning a page, but those are
secondary and rare. You can move and rotate the paper as a whole in 3D, and
that’s useful, but the same functionality can easily be added to VR 2D
interfaces.

On the other hand, there are other types of art that are fundamentally 3D,
such as sculpture and pottery – but both of those rely on feeling a 3D object
with your hands, which partially bypasses the 2D limitation of vision, but
isn’t yet possible to emulate in VR.

Then again, there’s also the common VR toy of a “pencil” that doodles in thin
air, which is certainly interesting... though I’m not sure how well it
generalizes to more abstract user interfaces. If you’re using such a system,
you have to constantly rotate the object you’re drawing, and/or move your body
around, in order to properly perceive the object in 3D. This is kind of a
pain. If your primary goal is to create a 3D object, it’s an unavoidable pain
and the benefit is well worth it; but if what you’re interacting with is just
an abstract interface meant to manipulate something non-spatial, it’s probably
better to avoid.

~~~
kragen
> both of those rely on feeling a 3D object with your hands, which partially
> bypasses the 2D limitation of vision, but isn’t yet possible to emulate in
> VR.

That's the entire point of the comment you're responding to and the reason for
Dynamicland.

~~~
comex
If so, it was off topic. John Carmack's post was about VR interfaces, and I
interpreted the parent comment in that context. In any case, I would contest
that bypassing the limitations of vision, specifically, represents a
significant part of the reason for Dynamicland.

~~~
kragen
If you "would contest" that, it's because you _still_ haven't read the essay
that was linked in the putatively off-topic comment you were "responding" to,
_after eight years_. And, yeah, you could try to redefine "VR" and even "3D
interfaces" (the _actual_ topic) as being strictly limited to "binocular video
responding to a head tracker", but even "VR" researchers have been researching
haptic and kinesthetic feedback from a zillion angles for a lot more than
eight years, so I don't think it's off-topic at all.

------
mncharity
Yes... but no...

Resemblance to reality as UI design smell... Just because we have head
tracking and stereo display, doesn't mean a UI should at all resemble reality.
Years ago, when cell phones were new, and UI design was focused on onboarding
novice users in an untrained culture, there was the idea of skeuomorphic UI
design - the app should resemble the familiar physical thing. The calendar app
should have lined pages that flip around a spiral binding, surrounded by a
textured leather margin. We don't do that anymore. It would be silly. As are
3D UIs resembling "spear chucking" reality. So yes, tossing xterms in the air,
and 3D UIs that resemble VR games, are both common and bad ideas. Quietly
assuming VR-gaming-like things cover the entire 3D UI design space, and
drawing conclusions built on that assumption, is another common and bad idea.

> Splitting information across multiple depths is harmful

Frequently jumping between dissimilar depths is harmful. Less frequent,
sliding, and similar depths, can be wonderful, allowing much denser and easily
accessible presentation of information. I so wish my laptop had a native 3D
display... DIY anaglyph and shutter glasses are a pain.

A general takeaway is... Much current commentary about "VR" is coming from a
community focused on a particular niche, current VR gaming. One with
particular, and severe, constraints and priorities... that very much don't
characterize the entirety of a much larger design space. And that community,
and commentary, has been _really_ bad at maintaining awareness of that. So
it's on you.

~~~
la_barba
Do you have links to example 3D UIs that you're talking about?

~~~
pj_mukh
Not OP, but the team at Leap Motion [1] has some very interesting demos: 3D
UIs re-imagined from the ground up. Weirdly, because now everything is 3D,
perhaps the best you can do is try to mimic actual 3D things, something that
didn't work in 2D at all?

[1]
[https://www.youtube.com/watch?v=7m6J8W6Ib4w](https://www.youtube.com/watch?v=7m6J8W6Ib4w)

[2]
[https://www.youtube.com/watch?v=6dB1IRg3Qls](https://www.youtube.com/watch?v=6dB1IRg3Qls)

~~~
mncharity
They're fun demos. And perhaps from the perspective of say mobile, they're
kind of reasonable. Rotating your wrist? Waving your arm to press a button?
Waving your arm to move a window? Not all that odd when holding a phone.

But from the perspective of laptops/desktops? What comes to my mind is: RSI;
slowwwww, but maybe it's the touchpad version of a standing desk?; people use
tiling window managers in part because waving a finger to place windows in 2D
is _already_ too burdensome.

I've just now failed again to find a nice video, but if you've ever seen it,
picture a professional artist tooling along on a Cintiq at insane speed.
Stylus, and two handed multitouch flying. Zipping through enormous menus.
Twitch gesturing through radial menus. Stylus input with pressure and tilt and
rotation. Now imagine that interactive area was several centimeters high. And
imagine transparent layers of content above the screen (you might sort of get
a feel, by moving a semi-transparent window in front of another, and switch
between reading one and the other, back and forth). Imagine a 3D visual
environment extending below and above the screen. And around it, throughout
the room. Now imagine your keyboard does multitouch, and instead of art,
that's your code.

------
hypertexthero
John Carmack:

> Last year, I argued that cylindrical panels were surprisingly effective, and
> we should embrace them for VR Shell UI. There was a lot of push back about
> giving up the flexibility of putting UI wherever you want in 3D, and
> curtailing the imagined future world of even more 3D interfaces, but the
> argument for objectively higher quality with the native TimeWarp layer
> projection kept it from being a completely abstract design question.

Perhaps something similar to this in 2D interfaces: I miss the original iOS’
embossed buttons with highlights and shadows that made their clickable-ness
clear, and find the text links we have now inferior. Older people I know who
have used iOS from the beginning also lament this change.

~~~
la_barba
Yes, many modern UIs are anti-UX dark patterns compared to those of the past.

------
madrox
I worked in a media tech lab that experimented with how to tell stories in VR.
The truth we encountered was that there was very little that VR did to enhance
storytelling. There's a lot of hyperbolic arguments that sound good, but don't
pan out in practice.

Reading Carmack's argument does make me wonder how much is objectively true
and how much is because we're trying to bootstrap a generation of 2D interface
users into 3D. Will this be one of those things that a generation of kids
raised in VR will roll their eyes at? I don't know.

------
bcheung
I generally agree.

I like keyboard interfaces for non-spatial input over a mouse because fingers
require less effort to move than arms and wrists. They can also move faster.
These problems are made worse in 3D.

I see 3D being useful in 2 cases:

1) Adding depth lets you keep more context. This is similar to zooming in and
out by pinching to keep the context, but with 3D you lose less peripheral view
and occlude less of the surrounding content. Our brains are better wired for
spatial processing in 3D.

2) Fitts's law predicts that the time required to rapidly move to a target
area is a function of the ratio between the distance to the target and the
width of the target. By adding another dimension we can get objects closer and
reduce the distance.
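
(For concreteness, a sketch of the usual Shannon formulation of Fitts's law;
the a and b constants are device-specific and the values below are just
placeholders.)

    import math

    # Fitts's law, Shannon formulation: time grows with the index of
    # difficulty ID = log2(D/W + 1).
    def movement_time(distance, width, a=0.1, b=0.15):
        """Predicted seconds to acquire a target of size `width` at
        `distance`; a and b are fitted empirically per device."""
        return a + b * math.log2(distance / width + 1)

    # Bringing the target closer (smaller D) cuts the predicted time:
    print(movement_time(distance=800, width=40))  # far, small target
    print(movement_time(distance=200, width=40))  # same target, closer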

------
tantalor
Cylindrical panels, TimeWarp, Medium, Quill

What is he talking about? Do Facebook posts not allow links?

~~~
Ajedi32
Cylindrical panels - I believe he's talking about positioning 2D surfaces
along the inside of an imaginary cylinder centered around the user. Like this:
[https://youtu.be/SvP_RI_S-bw?t=7](https://youtu.be/SvP_RI_S-bw?t=7)

TimeWarp - A technology for reducing perceived latency in VR.
[https://uploadvr.com/reprojection-explained/](https://uploadvr.com/reprojection-explained/)

Medium - A VR sculpting app.
[https://www.oculus.com/medium/](https://www.oculus.com/medium/)

Quill - A VR drawing app.
[https://www.oculus.com/experiences/rift/1118609381580656/](https://www.oculus.com/experiences/rift/1118609381580656/)

~~~
kibwen
I can't watch the video now, but does it explain why the inside of a cylinder
rather than the inside of a sphere? This was my question after reading the OP;
if you truly want the UI to be equidistant from a single point (the user's
head) in two dimensions, it seems like a sphere is the obvious choice there.

~~~
maemilius
My guess here would be not to have the entire interface equidistant from the
user, but, rather, to have the equivalent of a wrap-around monitor.

We're already used to having flat screens to interact with and simply curving
the display doesn't, in my experience with a curved monitor, have any
detrimental effect on your ability to view the display.

Beyond that, in a mathematical sense, a flat rectangle and a cylinder are
very similar surfaces: a cylinder is developable, meaning you can roll a
rectangle into one without any stretching, so the relative size and shape of
things on the surface are well preserved[1].

This is not the case for a sphere.

As a practical demonstration, take a piece of paper and draw some boxes on it
before wrapping it around a reasonably sized ball. You'll see that the boxes
(particularly the larger boxes) become deformed in a way that likely will make
them seem to "bulge out". As far as I'm aware, the effect happens to any
regular polygon wrapped around a sphere. The same thing will happen to any
interface that attempts to project onto a sphere.

So, given that:

a) we're already used to flat interfaces

b) curving that interface into a semi-cylinder doesn't[2] have any detrimental
effects

c) that traditional square interfaces don't map directly/cleanly onto a sphere

It's easy to conclude that a cylindrical interface is probably the simplest
to make and the easiest to understand. It requires the smallest amount of
work to make and the least learning on the part of the user.

1: Obviously, things on the cylinder will be curved, but the 2 dimensional
shape is still consistent.

2: for me, at least
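
(To make the paper-and-ball demo above concrete, here's a sketch of the
cylindrical mapping such a panel implies; the radius and coordinates are
arbitrary. A flat UI coordinate maps onto the cylinder by bending, never
stretching.)

    import math

    # Map a flat (u, v) panel coordinate onto a cylinder of radius r
    # around the user: horizontal distance becomes arc length, so 2D
    # proportions are preserved (the cylinder is developable).
    def panel_to_cylinder(u, v, radius=1.5):
        theta = u / radius            # arc length -> angle
        x = radius * math.sin(theta)  # user at origin, looking down +z
        z = radius * math.cos(theta)
        return (x, v, z)

    # No analogous distortion-free mapping exists for a sphere (Gauss's
    # Theorema Egregium) - hence the "bulge" when wrapping paper on a ball.
    print(panel_to_cylinder(0.0, 0.0))   # panel center, straight ahead
    print(panel_to_cylinder(0.75, 0.2))  # half a radian to the right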

------
anderspitman
I think most of us who get into data visualization go through a "3D all the
things!" phase. I certainly did. But eventually I had to accept that the
tradeoffs just aren't worth it. Occlusion is a serious problem. Our brains
simply process 2D much more efficiently.

Obviously volumetric data is an exception.

------
Animats
Well, yes. Ever since the old SGI 3D file browser [1], true 3D interface
approaches have been terrible. Tilt Brush is probably the best success so
far.[2]
Even that, though, isn't very good for artistic work. It's just finger-
painting in 3D.

Without force feedback, human positioning in 3D space is terrible. Maybe if we
get usable VR gloves with force feedback, so you can grab a knob or lever and
feel the movement, stops, and detents, it could be workable.

[1]
[https://en.wikipedia.org/wiki/Fsn_(file_manager)](https://en.wikipedia.org/wiki/Fsn_\(file_manager\))

[2]
[https://www.youtube.com/watch?v=TckqNdrdbgk](https://www.youtube.com/watch?v=TckqNdrdbgk)

~~~
filoeleven
Check out this Smarter Every Day introduction to the HaptX force feedback
glove from last year.[1] The guy thought VR was stupid until he used the
glove, then was instantly immersed. They’ve packed a lot of sensory feedback
into it.

Once force feedback hits, translating 2D onscreen control surfaces to 3D
models will be both trivial and worthwhile, at least in the niche realm of
virtual audio devices. Touchscreens have already made them much more playable,
and things like TouchOSC allow for some customization. I’d love to have
something like the Reason rack[2] in a space where I could use all 10 fingers
instead of 1 mouse pointer to interact with it.

[1] [https://youtu.be/OK2y4Z5IkZ0](https://youtu.be/OK2y4Z5IkZ0)

[2]
[https://images.app.goo.gl/37FMUZ6kfgTb5rPf6](https://images.app.goo.gl/37FMUZ6kfgTb5rPf6)

------
rajnathani
> Splitting information across multiple depths is harmful, because your eyes
> need to re-verge (and, we wish, re-focus).

> It may be worse in VR, because you have to fight the lack of actual focus
> change. With varifocal we would be back to just the badness of the real
> world, but no better. This is fundamental to the way humans work.

This point on the eye focus changes caused while viewing 3D interfaces (as
opposed to 2D interfaces) is a very well-founded one.

------
Vermeulen
The key is to use a 3D interface when the data is actually 3D, and only then.

3D: level designing / modeling, physics, anything with spatial relationships.

2D: options menus, scripting and logic, numbers and info.

Since we read in 2D, it's always going to be easier to parse.

------
bluthru
He's right that changing your depth focus is more taxing, but I wonder what's
healthiest for the eyes? Maybe it would be good for the eyes to change focus
depth while wearing VR headsets? If I stare at my monitor too long without
breaks, my eyes definitely get lethargic when trying to focus into the
distance. (General tip: every 20 minutes, stare at something at least 20 feet
away for 20 seconds.)

~~~
pugworthy
Curved monitors are basically meeting the same need - to maintain the same
focal depth as one looks side to side, shifting between windows, applications,
etc.

I did find a few papers with a quick search that cover the topic for
monitors. I would assume any kind of finding about curved monitor ergonomics
carries over to the cylindrical view idea in VR.
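
(A quick back-of-the-envelope comparison; the monitor width and viewing
distance are assumptions, just to show the size of the effect.)

    import math

    # Flat monitor: distance from the eyes grows toward the edges.
    # Curved monitor with radius = viewing distance: constant everywhere.
    view_dist = 0.6   # meters, eyes to screen center
    half_width = 0.4  # meters, half of an ultrawide panel

    flat_edge = math.hypot(view_dist, half_width)
    print(f"flat edge: {flat_edge:.3f} m vs center: {view_dist} m")
    # -> ~0.721 m at the edge, a ~20% change in focal distance; a
    #    cylinder of radius 0.6 m keeps every point at 0.6 m.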

See
[https://pdfs.semanticscholar.org/d1c0/3f54a46b5fefc27b7a4207...](https://pdfs.semanticscholar.org/d1c0/3f54a46b5fefc27b7a4207fec7cf7634e6ea.pdf)
and
[https://www.tandfonline.com/doi/pdf/10.1080/15980316.2015.11...](https://www.tandfonline.com/doi/pdf/10.1080/15980316.2015.1111847)
for example.

------
allenu
It's really interesting to see Carmack post this opinion (especially as it is
two years old). I trust his insight and way of doing things, so it's
fascinating to see even a glimpse of how he operates in a larger company where
clearly their objectives and opinions won't always match his own.

I wonder in general if he's still bullish on Oculus (perhaps he is with VR in
general though).

------
dustbuster
I have been using the cylindrical layout shown here:

[https://arcan-fe.com/2018/03/29/safespaces-an-open-source-vr...](https://arcan-fe.com/2018/03/29/safespaces-an-open-source-vr-desktop/)

for a few weeks now - far from full time, as the HMD display, lenses (CV1
here) and positioning/rotation tracking are not good enough with the open
source driver, but it feels really close. Except for some tweaking and extra
"spacing" between the focus window and the side windows, it works exactly as I
want it to. I added some customisation so the 'layers' do not exist at
different depths but on different arcs on the same layer. Switching
"workspaces" simply rotates these layers to the 12 o'clock position. The
biggest problem right now is not having front cameras to see what my hands are
doing.

------
z3t4
One interesting phenomenon is that focused 2D images appear to be higher res.
It would be cool if a 3D UI could somehow know in real time what we focus on
and put that plane in focus. Somehow we can see if another human or animal is
focusing in on us, so it might be possible for a computer to also see what we
focus on.
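
(That's essentially gaze-contingent depth of field, which eye-tracked headsets
could approximate. A toy sketch of the idea; the blur model and constants are
invented for illustration.)

    # Blur each UI layer in proportion to how far its depth is from the
    # depth the user is fixating (reported by an eye tracker). Defocus
    # scales with the difference in diopters (1/distance), as in optics.
    def blur_radius(layer_depth_m, gaze_depth_m, strength=2.0):
        return strength * abs(1.0 / layer_depth_m - 1.0 / gaze_depth_m)

    for d in (0.5, 1.0, 2.0, 4.0):  # layer depths in meters
        print(d, round(blur_radius(d, gaze_depth_m=1.0), 2))
    # the 1.0 m layer renders sharp; nearer and farther layers blur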

------
akjetma
Media Molecule's Dreams has the most impressive user interface I've ever
used. It uses 2D panels laid out in 3D space, but the panels can be anchored
and resized in screen space, object space or world space, and it's extremely
intuitive. I highly recommend people interested in UX/UI check it out.

~~~
bcheung
Looks very inefficient.

Pressing a key on a keyboard works better than using spatial input from a
pointer device for those cases. To click a button/icon there is a feedback
loop that requires vision processing, correction, anticipation, and hand-eye
coordination. You have to iterate many times in this loop to move to the
correct location. Often the refresh latency makes this problem even harder.

This problem is better understood when you have a virtual keyboard and only a
single pointer.

Ctrl/Cmd-C is easier and faster than moving the mouse to Edit -> Copy.

Creative professionals rely on physical keyboards and buttons on the mouse for
actions and only use spatial input for things that are spatially relevant
(pan, placement, zoom, rotation, selecting surfaces, edges, vertices, etc).

~~~
Animats
_Creative professionals rely on physical keyboards and buttons on the mouse
for actions and only use spatial input for things that are spatially relevant
(pan, placement, zoom, rotation, selecting surfaces, edges, vertices, etc)._

That used to be true. Autodesk put a lot of effort into interfaces for
engineering in 3D. In Inventor, you only need the keyboard to enter numbers or
names. They managed to do it all with the mouse. Try Fusion 360 to see this;
there's a free demo.

~~~
bcheung
You mean the right-click radial context menus? Yes, those are really nice.
They require much less of a feedback loop and often don't require any visual
processing at all. They are also more gesture-based than button-based. The
affordance is very generous compared to buttons and icons.

I use a 3DConnexion Space Navigator (6 DOF) in my left hand and mouse in my
right for selection when using Fusion 360 and often use the gestures on the
mouse.

I guess that brings up an exception. The context switching cost. Moving from
pointer to keyboard is very slow so gestures really help out in that regard.
If my hand is already on the keyboard then I have less reason to want to use
the gestures.

~~~
Animats
That, plus the ability to rotate, pan and zoom while you're in the middle of a
selection. That's a huge win. In 3D work, you often need to select 2 or more
things, and those things may be small and need precise selection. Precision
multiple selection is hard.

Before this was worked out, most 3D programs offered four panes, with three
axial projections, usually wireframe, plus a solid perspective view. Just so
you could select. Now we only need one big 3D pane.

------
SmellyGeekBoy
It probably needs to be pointed out that this is a post from John Carmack. I
almost didn't click on it due to the facebook.com domain but spotted his name
in the comments here. Obviously Carmack is still highly regarded and respected
in the industry and his opinions quite rightly hold a lot of weight, despite
his current choice of employer / platform.

------
delhanty
Here's another nice article from June 2018 touching on many of the same
points:

[https://hackernoon.com/predictions-from-the-first-ar-os-9440...](https://hackernoon.com/predictions-from-the-first-ar-os-94407773cc9)

>The way we organise our physical world is probably more relevant to AR than
how we organise our 2D interfaces. ...

------
ljm
I first misunderstood this as interfaces in 3D _games_, and it doesn’t really
change the supposition.

Interacting with most AAA title UI is an exercise in extreme frustration. Days
Gone’s menu is like someone developed a fetish for prezto.js.

It’s pretty, but also pretty unintuitive.

Also beside the point a bit.

------
rocky1138
I feel like this needs examples. What exactly is he recommending as a gold
standard?

~~~
lallysingh
2d interface on the inside of a cylinder that surrounds the user, it seems.
Perhaps not all the way around.

~~~
ascagnel_
That makes sense -- switching depths is more taxing (and gets weirder in VR,
since you're not actually changing your eyes' point of focus), but a 2D
cylinder gives you as much horizontal space as you could possibly get.

------
RootKitBeerCat
I was angered by the title (a bit clickbaity, as context is king for a UI),
but upon reading the post I agree 100% with the title.

------
firethief
Of course. We can't see 3 dimensions; we can only see what can be projected
onto a portion of a spheroid.

------
agumonkey
The point is abstraction, and more sensory dimensions are very unlikely to
help there.

------
postalrat
Imagine if we interacted with everything we encounter every day on a single 2D
plane.

------
Lendal
Facebook?! John Carmack doesn't have a blog? :(

~~~
SteveGregory
John currently works for Facebook as CTO of Oculus.

~~~
svnpenn
That's too bad, as their interface is absolute garbage.

