
I'd love to see some exploration of how VR could be used past the idea of "you have infinite monitor space now".


That's really all I ask. I doubt very much that any movement in virtual space is going to be as efficient as a keyboard and mouse, so just let me see everything and call it a day.

Relevant Community commentary on the potential ridiculousness of VR interfaces: https://www.youtube.com/watch?v=z4FGzE4endQ


Honestly, that Community episode was the only thing I could think of during this discussion.

Unless someone thinks of a way to increase information density in the VR environment by an order of magnitude (or several), it will be about this ridiculous.


I think the problem with programming is that it's essentially 1D text, arranged a bit across a 2D space.

So 2D programming isn't really a thing. That's probably why 3D isn't a thing either.


That's only true if you don't consider the context the code lives in. For example, colorization in editors can be seen as a visualization of additional dimensions of info (syntax, types), and autocomplete is like a branching, "time-like" dimension. Beyond that, code mostly tries to solve problems that can be represented and shown visually.
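
To make that first point concrete, here's a quick sketch that treats each token's syntactic category as an extra display "dimension" rendered as color. Only the tokenizer is real; the palette and the idea of rendering it this way are made up for illustration:

    import io, tokenize

    # Made-up palette: map syntactic category -> color, i.e. render
    # one extra "dimension" of info on top of the 1D text.
    PALETTE = {
        tokenize.NAME: "blue",
        tokenize.NUMBER: "green",
        tokenize.OP: "gray",
        tokenize.COMMENT: "dim",
    }

    src = "total = price * 1.2  # add tax\n"
    for tok in tokenize.generate_tokens(io.StringIO(src).readline):
        if tok.string.strip():
            print(f"{tok.string!r:>12} -> {PALETTE.get(tok.type, 'default')}")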

The need exists; how to turn that into a solution with a usable interface that I'd want in front of me for multiple hours a day is beyond me, though.

For me, Bret Victor's talks point at exactly this problem, where we treat code as something we only interact with as text even though it could be so much more: https://vimeo.com/36579366 https://vimeo.com/64895205


Computers execute essentially 1D instruction streams: one instruction after another. A 2D programming language would add an extra layer of abstraction between the programmer and those instructions, which would cost performance and memory.
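
For what it's worth, 2D languages do exist as esolangs (Befunge is the classic example); the instruction pointer walks around a grid instead of advancing through a list. Here's a toy sketch (not real Befunge, the ops are made up) showing that even then, execution still flattens into a 1D stream of fetched instructions:

    # '>' 'v' '<' '^' steer the instruction pointer, '+' increments
    # an accumulator, '@' halts. The 2D layout only changes how the
    # *next* instruction is addressed; execution remains sequential.
    grid = [
        ">+++v",
        "@+++<",
    ]
    x, y, dx, dy, acc = 0, 0, 1, 0, 0
    while grid[y][x] != "@":
        op = grid[y][x]
        if op == ">": dx, dy = 1, 0
        elif op == "<": dx, dy = -1, 0
        elif op == "v": dx, dy = 0, 1
        elif op == "^": dx, dy = 0, -1
        elif op == "+": acc += 1
        x, y = x + dx, y + dy
    print(acc)  # 6: six '+' cells were visited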


Yes, maybe 2D for multiple processors?


My problem with programming is that I don't even want to look at the screen, type on the keyboard or use a mouse.

Constantly having to translate my thoughts and ideas manually, through my hands and eyes, is annoying.


Might be an easier way to handle parallelization.


An IDE that takes advantage of depth could be kinda cool. Imagine if every level of indentation resulted in text that appeared to be further away. It would be a great way to discourage spaghetti code by making it physically uncomfortable to read.
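
A rough sketch of that mapping, assuming a hypothetical VR renderer that accepts a per-line z-offset (DEPTH_PER_INDENT is an invented tuning value):

    DEPTH_PER_INDENT = 0.15  # meters per indent level (made up)

    def line_depths(source: str, indent_width: int = 4):
        # Yield (text, z-offset) pairs: deeper nesting recedes further away.
        for line in source.splitlines():
            stripped = line.lstrip(" ")
            level = (len(line) - len(stripped)) // indent_width
            yield stripped, level * DEPTH_PER_INDENT

    src = "def f(xs):\n    for x in xs:\n        if x:\n            yield x\n"
    for text, z in line_depths(src):
        print(f"z={z:.2f}m  {text}")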


That would be pretty cool though. Imagine coding with VR goggles on: you move your head forward to "zoom in" on the code, and with an eye tracker built into the goggles you move the cursor and type. Using physical head gestures for navigation, if done correctly, has great potential.
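
A minimal sketch of the lean-to-zoom part, assuming a hypothetical tracking API that reports head position in meters (all the constants here are invented):

    NEUTRAL_Z = 0.0                  # resting head position (meters)
    MAX_LEAN = 0.30                  # leaning 30 cm forward = maximum zoom
    MIN_SCALE, MAX_SCALE = 1.0, 3.0

    def zoom_for_lean(head_z: float) -> float:
        # Clamp the lean, then interpolate linearly between text scales.
        lean = min(max(head_z - NEUTRAL_Z, 0.0), MAX_LEAN)
        t = lean / MAX_LEAN
        return MIN_SCALE + t * (MAX_SCALE - MIN_SCALE)

    for z in (0.0, 0.1, 0.3):
        print(z, round(zoom_for_lean(z), 2))  # 1.0, 1.67, 3.0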


Been thinking about this too.

I think this might open up new ways of interacting with systems that are more accessible than what we have today. Currently, system diagrams are limited by how much info you can cram into two dimensions; without some kind of abstraction they mostly become too information-dense to be useful.

What could we do with 3D? What if we could build and lay out different systems and system designs in a 3D space? What if I could point to our MySQL cluster, our Kubernetes cluster, our applications, etc.?

Only half-formed thoughts, but I'm super excited about what avenues VR will open up for developers.


There was an interesting post that touched on that topic a while ago: https://news.ycombinator.com/item?id=24162703


I personally really don’t like staring at text in VR. There’s less angular resolution and more visual artifacting in the HMD compared to a good 72dpi screen (or even better, a retina screen) at 2 feet.
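
Back-of-envelope numbers (the monitor figures follow from geometry; the HMD figure is a rough ballpark for current consumer headsets, not a spec):

    import math

    def ppd_of_monitor(dpi: float, distance_in: float) -> float:
        # Pixels covered by one degree of visual angle at this distance.
        return dpi * distance_in * math.tan(math.radians(1))

    print(round(ppd_of_monitor(72, 24)))   # ~30 ppd: 72 dpi screen at 2 ft
    print(round(ppd_of_monitor(220, 24)))  # ~92 ppd: "retina"-class panel
    print(20)                              # ~20 ppd: rough HMD ballpark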



