Second question: how many data points can a visualisation contain before you start to see noticeable lag?
Re: the second question, I'm not sure, but I'll ask the team to run some stress tests.
That said, it has to be done right. Not only that, I'd expect it to be 'VR-only', not even just 'VR-first'. The kind of visualisation tool I'm expecting would have difficulty conveying the same data on a flat screen. Sure, one could argue that you're still only looking at flat screens inside the goggles, but the multiple axes of movement allowed by head translations as well as hand movements give you superior control versus mouse input alone.
If you were looking at two screens with identical images, that might be a valid argument, but VR headsets provide stereoscopic viewing by presenting each eye with a different image of the virtual scene, rendered from that eye's viewpoint; this is known as binocular disparity. It's the same principle used in 3D TVs and anything else that requires you to wear special glasses.
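To put a number on it, here's a rough back-of-the-envelope sketch (the interpupillary distance and depths are my own assumed toy values, not from this thread): under a small-angle approximation, the disparity of a point at depth `z` while fixating depth `z_fix` is about `b * (1/z - 1/z_fix)`.

```python
B = 0.063  # assumed interpupillary distance in metres (typical adult value)

def disparity_rad(z, z_fix, b=B):
    """Approximate retinal disparity (radians) of a point at depth z
    while the eyes fixate a point at depth z_fix.
    Small-angle approximation: disparity ~ b * (1/z - 1/z_fix)."""
    return b * (1.0 / z - 1.0 / z_fix)

# Disparity falls off with distance: a point 0.5 m in front of a 1 m
# fixation point carries far more disparity than one 0.5 m behind it.
near = disparity_rad(0.5, 1.0)   # positive (crossed) disparity
far = disparity_rad(1.5, 1.0)    # smaller, negative (uncrossed) disparity
```

This is also why the depth cue is strongest at arm's length and nearly useless for distant scenery, which fits data manipulation in VR rather well.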
The "head translations" you're talking about give the visual system depth cues via motion parallax: objects in the foreground appear to move faster than those in the background when the head moves from side to side.
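The geometry behind that is simple enough to sketch (the head speed and depths below are assumed toy numbers): for lateral head motion at speed `v`, a point at depth `z` sweeps across the visual field at roughly `v / z` radians per second, so nearer points move faster.

```python
def angular_speed(v, z):
    """Approximate angular velocity (rad/s) of a point at depth z
    for lateral head speed v (small-angle approximation)."""
    return v / z

v = 0.2  # head translating sideways at 0.2 m/s (assumed)
foreground = angular_speed(v, 0.5)   # nearby object: 0.4 rad/s
background = angular_speed(v, 5.0)   # distant object: 0.04 rad/s
```

That 10x difference in apparent speed is exactly the parallax cue the comment describes.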
These two cues together (stereoscopy and motion parallax) yield a very strong sense of 3D depth; the depth percept from binocular disparity in particular is known as stereopsis. Having controllers with six degrees of freedom (6DOF: translation along and rotation about the x-, y-, and z-axes) to manipulate and interact with 3D data should be superior, since it is no longer necessary to map 2D mouse inputs to 3D operations, which, in theory, also decreases cognitive load.
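To illustrate the indirection that 6DOF input avoids: one classic way desktop apps map 2D mouse drags to 3D rotation is an arcball (project the 2D point onto a virtual sphere, then rotate between the two projected 3D points). A minimal sketch of that projection, my own simplification rather than anything from this tool:

```python
import math

def to_sphere(x, y, radius=1.0):
    """Project a 2D point (coords normalized to [-1, 1]) onto the
    arcball sphere, falling back to a hyperbolic sheet near the edge."""
    d2 = x * x + y * y
    if d2 <= radius * radius / 2:              # inside the sphere
        z = math.sqrt(radius * radius - d2)
    else:                                       # outside: hyperbolic sheet
        z = radius * radius / (2 * math.sqrt(d2))
    return (x, y, z)

def drag_angle(p0, p1):
    """Rotation angle (radians) implied by dragging the mouse from p0 to p1."""
    a, b = to_sphere(*p0), to_sphere(*p1)
    dot = sum(u * v for u, v in zip(a, b))
    norm = math.hypot(*a) * math.hypot(*b)
    return math.acos(max(-1.0, min(1.0, dot / norm)))
```

A 6DOF controller skips this whole mapping: the controller's pose *is* the rotation and translation, with no virtual sphere to reason about, which is where the cognitive-load argument comes from.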
1. Moving through the data set in room-scale VR is completely different from seeing and manipulating a 2D projection of it on a screen. Exploration is much more intuitive (just move your head!) and the perfect depth information that you perceive feels almost like an additional input channel to the brain.
2. Collaboratively analyzing visualized data in VR would require hacks that render abstract avatars of co-workers and strip away a lot of the informational content of direct communication. With AR and a direct line of sight, all of those obstructions just go away.
Is there a binary I can download and run? I don't know what version of Unity you used, so I don't really want to compile it myself if I can avoid it :)
EDIT - Answered my own question, it's available from here: http://calcflow.io/
I'd suggest you put this on Steam, folks - chances are you'd make your $100 Steam Direct fee back fairly quickly.
Someone should make a VR version for mechanical motions, like that "How to Make a Car" course that was posted here a while ago.
> This project is licensed under the NANOME VR PRODUCT SUITE
Digging a bit, it appears this is funded by an ICO, or at least created by a company currently running one? Too bad that basically short-circuits to "smells fishy" for me right now; hopefully they can pull through and build a track record of credibility.
Is this going to be notably tied to a custom ETH token?
That idea does seem to contribute to the nonsense you mention.
Especially considering that the README complains about competitors being "unintuitive", but then just slaps `blockchain` onto a visualization program.
- - -
Also pardon my pedantry, but it's "source-available": most people consider "Open Source" (as a branding) to mean the code is open to access, modify, and use. You have released the code, sure, but you are massively restricting how that code can be used (including preventing the program from being used with other versions of the same program??)
https://opensource.org/osd for the "common" definition of the term
Do you know why the project is released under a custom license, rather than something familiar? If this could be summarized in a few simple sentences I'd appreciate it. Ain't nobody got time to read all that legalese.
For example: is the purpose of the license to prevent commercial re-use? In other words: what does this custom license accomplish that no normal OSI-approved license could?