Were they playing anything or just walking around in the demo? I have one and have tested it out with many people. When people get sick, it's usually because they've been in one of the less interactive demos, just sort of looking around, wiggling the mouse, etc. The people I let loose straight onto Half-Life 2 or other games where there is some sort of goal rarely get the same type of nausea. I think this is worth taking into account when people say "it's nausea inducing".
For insta-nausea put someone in the Tuscany demo and move them around while they wear the rift. Hilarity ensues!
We did one demo that was more or less floating above static terrain, and another with Half-Life 2.
My own nausea didn't actually start up until HL2. We were playing on a laptop (probably not enough horsepower), using slightly unfamiliar keyboard & mouse controls, so some movements were fluid and second-nature to me, while others were jerky and off because of the unfamiliar control scheme.
It's possible the most nauseating moment was taking the headset off, actually. Up to that point, my mouse hand had been a pretty accurate proxy for my in-game hand and arm, but taking my hand off the mouse and then ripping the "world" away with that same (now "phantom") hand was deeply disconcerting.
But it's also possible the nausea built up slowly over the course of playing. I'd love to spend more time with it to see if it's something you really can adjust to.
On the subject of movement while wearing a VR headset, it seems to me that most successful games will be those where your character remains seated. Flight simulators, space combat sims, mech games, stuff like that.
For that kind of game, just the Rift and a joystick should provide an amazing experience.
Which attributes would you sample from a normal distribution here? I don't see any numerical attributes where this would make sense. One could add weight, height, age etc. and sample from the relevant geographical/gender distributions?
An example would be the processing of automatically acquired measurement data. Such data usually shows some kind of distribution (binomial, normal, geometric, etc.) that needs to be taken into account when validating such a tool.
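To make that concrete, here is a minimal sketch (Python with numpy; the attribute names and distribution parameters are made-up placeholders, not real population statistics) of generating synthetic records whose attributes follow known distributions, so a processing or validation tool can be checked against data whose statistics are known by construction:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 1000  # number of synthetic records

    # Hypothetical attributes; the parameters are purely illustrative.
    heights_cm = rng.normal(loc=170, scale=10, size=n)    # roughly normal
    ages = rng.geometric(p=0.02, size=n) + 18             # skewed, long tail
    defect_counts = rng.binomial(n=20, p=0.05, size=n)    # count data

    # A validation run can then compare the tool's output against data
    # whose distribution is known by construction, e.g. via sample moments.
    print(heights_cm.mean(), heights_cm.std())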
In a mission to gather data, extra megapixels clearly give you extra data. You cannot argue against that! We aren't talking amateur photography here, where the quality of the shot is what matters and a decent lens beats higher megapixels; we are talking about acquiring measurements of the number of photons in a discrete spatial region. Clearly a higher-resolution sensor might allow scientists to see something not visible in lower-resolution images. I do admit that in this case I understand the choice, given the specification and testing constraints, but to suggest that extra megapixels do not give more information is silly.
Actually, the opposite is true. Photon absorption/detection is a quantum event, and limited by probability. For a given sensor chip size (and technology generation), fewer, larger sensels are going to provide samples that are statistically closer to the Absolute Truth. (Averaging repeated samples will reduce the error further.)
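A quick way to see this is to simulate photon shot noise. The sketch below (Python/numpy, with a made-up flux figure) compares a larger and a smaller sensel at the same incident flux: photon arrivals are Poisson-distributed, so the relative error of a single reading falls off as 1/sqrt(expected photon count), which is why the larger sensel lands closer to the true value on average:

    import numpy as np

    rng = np.random.default_rng(0)
    flux = 1000.0  # mean photons per unit area per exposure (made-up scale)

    # Photon arrivals are Poisson-distributed, so the relative error of a
    # single reading scales as 1/sqrt(expected count).  A sensel with 4x
    # the area collects 4x the photons and halves the relative error.
    for area, label in [(4.0, "large sensel"), (1.0, "small sensel")]:
        expected = flux * area
        samples = rng.poisson(expected, size=100_000)
        rel_error = samples.std() / samples.mean()
        print(f"{label}: ~{expected:.0f} photons, relative error {rel_error:.3f} "
              f"(1/sqrt(N) = {expected ** -0.5:.3f})")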
Using a well-corrected lens of an appropriately longer focal length, and thus a narrower field of view, with or without panoramic stitching, will provide at least the same linear resolution of a given subject, but with less sampling error.
Quite possibly better, in fact, since apochromatic correction is easier in longer focal-length lenses, provided that no super-wide-aperture bokeh heroics have gone into the design. Rectilinearity (the absence of barrel or pincushion distortion) is also easier to achieve. Flare can be reduced without inducing undue mechanical vignetting, increasing contrast.
Adding complementary features such as edges (I'm guessing this is what you mean by lines) tends to improve the accuracy. It would also be possible to do this in real time.
I don't have time to watch the full video, so I don't know what features he is currently using, but in object detection nowadays most people are using some variant of the SIFT descriptor. These are built not upon edges but on the per-pixel image gradient. The current 'hot' feature in terms of frequency of use is probably Histograms of Oriented Gradients (HOG), which do exactly what they say on the tin: take a region of the image and build a histogram of the gradient directions that occur there, weighted by their magnitudes. Slightly more difficult to run in real time, but libraries exist.
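For anyone curious what that counting step looks like, here is a minimal sketch in Python/numpy of just the core of a HOG-style descriptor: binning gradient orientations in a patch, weighted by gradient magnitude. Real implementations (e.g. skimage.feature.hog or OpenCV's HOGDescriptor) add a cell/block structure and contrast normalisation on top of this:

    import numpy as np

    def gradient_histogram(patch, n_bins=9):
        """Bin gradient orientations in an image patch, weighted by magnitude.

        Only the counting step of HOG; cell/block pooling and block
        normalisation are omitted for brevity.
        """
        gy, gx = np.gradient(patch.astype(float))
        magnitude = np.hypot(gx, gy)
        # Unsigned orientation in [0, 180) degrees, as in the original HOG paper.
        orientation = np.degrees(np.arctan2(gy, gx)) % 180
        hist, _ = np.histogram(orientation, bins=n_bins, range=(0, 180),
                               weights=magnitude)
        return hist

    # Toy patch with a vertical edge, so horizontal gradients should dominate
    # and nearly all the weight lands in the first orientation bin.
    patch = np.zeros((16, 16))
    patch[:, 8:] = 1.0
    print(gradient_histogram(patch))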