Actually, the 3D view that 2D man does not understand but which we do understand is... still 2D. My screen is flat.
You can use a 2D viewport to render a 3D scene in a way that is natural and easy for us humans to understand: a person glancing at the viewport can very quickly surmise which objects are in the scene, and where they are located, in _ALL_ 3 dimensions.
This raises the question:
Can you render a 4D scene onto a 3D viewport such that we humans are pretty good at understanding where every object in the scene is, in all 4 dimensions?
I assume the answer is 'yeah, you can do that'. I wonder what that would look like.
It's complicated of course; our eyeballs are involved and they kinda work in 2D and not 3D; where we humans can casually glance at a 2D viewport rendering a 3D scene for a second and know what's happening, we'd have to walk around the 3D screen rendering a 4D scene in order to even see everything.
Even though there's no shading, the resolution is bad, and there's zero context anchoring it in a familiar setting, you immediately see it as a 3D object, when effectively it's just 3 2D quadrilaterals changing colors and shape. We really need very little to be able to extrapolate depth information from a very limited 2D-only display.
If you did that with 4D cues in 3D space (say in VR or something) I think it wouldn't work. It'd just look like a 3D object changing shape, not a projection of a higher dimension (the same way a hypothetical 2D being would see 3 quadrilaterals changing shape in the above video, not a solid 3D object, which would be an abstract concept to them, not a familiar reality).
Could we teach our brain to "think in 4D" and interpret these cues differently? To a certain extent probably, maybe if you spent hours and hours and hours in a VR simulation with 4D objects you'd start getting a feel for it. I doubt you'd ever get as good as with 3D objects since we've been exposed to these literally every waking moment since birth, but maybe I underestimate the plasticity of our brain.
That actually leads me to another question: is 3D somewhat hardcoded in our brains or is it purely learned? If we were mad scientists and took a baby's brain into a 4D "Plato's cave" style simulation, could it grow into being able to perceive 4D as intuitively and effectively as we do with 3D space? Also, unrelated question: does anybody have a baby I could borrow?
Good lord! Has anyone tried?
This reminds me of the backwards bicycle video on youtube (https://www.youtube.com/watch?v=MFzDaBzBlL0). I... really want to try what you're suggesting.
"So here's what I did. It was a personal challenge. I stayed out here in this driveway and I practiced about 5 minutes every day. ... after 8 months it happened ... in two weeks he (the son) did something that took me 8 months to do".
Well, if you were serious about learning a skill you wouldn't just do one 5 minute session per day for eight months.
A more appropriate training regime would be way more intensive than one 5 minute session per day. Are we to believe he limited his son to one 5 minute session a day?
Do children really learn languages quicker than adults? By age 3, children will probably have words for almost everything. Babies might even say mama and dada by 6 months of age.
Children have several language learning advantages over adults: complete immersion; it is imperative they learn; effectively unlimited time; no responsibilities.
But, it takes them years to learn their native language to basic competency.
Compare: an English speaker can learn Afrikaans, Danish, Dutch, French, Italian, Norwegian, Portuguese, Romanian, Spanish, or Swedish to General Professional Proficiency in Speaking and Reading in 600 hours carried out over 24 weeks (25hrs per week). Cantonese, Mandarin, Japanese, Korean, or Arabic will take 2200 hours over 88 weeks (25hrs per week).
Is it easier for a child? Yeah probably, they don't even have to look after themselves.
If I could live as a child in a foreign-language house in a foreign-language city, with only two tasks: learn to speak and read the language to basic competency, and learn to ride a backwards bicycle, I'm firmly of the belief I could outpace the 95th percentile of children at both tasks.
As an aside, he says his son is the closest person to him genetically, but aren't his parents both equally as close to him genetically as his son?
And any siblings.
So it only took him about an hour, over three days, to get it worked out well enough to not immediately fall off, and an hour and a half over four days to get to 50 meters.
There's a comment on the YT video from Destin saying "the RC helicopter pilot was able to learn it in about an hour but I don't think his brain is normal" - good point: the RC helicopter pilot has more experience understanding reversed input while the helicopter is flying toward him, and he also had the opportunity to watch Mike learn first.
I reckon the learning process could be sped up even more by taking the pedals off, turning it into a balance bike, learning how to ride it down slight inclines, getting that sorted, then putting the pedals back on.
It’s possible to project (n+1)-d scenes onto n-d scenes.
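A minimal sketch of what such a projection can look like, assuming a simple perspective divide along the dropped axis (the viewer distance is made up for illustration):

```python
def project(point, viewer_dist=3.0):
    """Perspective-project an (n+1)-d point to n-d by dividing the first
    n coordinates by the distance to a viewer sitting on the last axis."""
    *head, last = point
    return [c / (viewer_dist - last) for c in head]

# Apply once to go 4D -> 3D, and again to go 3D -> 2D.
p3 = project([1.0, 1.0, 1.0, 1.0])  # -> [0.5, 0.5, 0.5]
p2 = project(p3)
```

Each application loses one dimension, so a 4D scene reaches a flat screen after two divides.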
This is an old philosophy debate about whether, if a blind person could suddenly see, they'd understand that a sphere is round just by looking at it.
When some people who had had cataracts for 50 years were treated, they not only couldn't tell that a sphere was round, they couldn't understand shadows or depth of field/distance. They thought shadows on people were black splotches, and that when something was moving away it was actually getting smaller.
I suspect visual input gets trained on our neural network like anything else, though we do have some specialized hardware for it.
Actual 3D-sight might include us seeing the inside of objects, what's behind them, etc.
We're aided in converting our 2x 2D sight into 3D mental models through the use of color and parallax. In projections on images, it's easy to just use color and get rid of parallax, and still get sort of close.
If we tried to make a 3D projection of a 4D world, we wouldn't actually get any significant new tools. We're still going to be limited by having to use parallax and color. This doesn't allow us to add enough information to encode that 4D projection into 3D as neatly as the 3D-into-2D case works. At best you could try to distribute the information somehow, for example by giving information about the third dimension through brightness and the fourth dimension through hue. It certainly wouldn't work nearly as well though.
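A quick sketch of that brightness/hue idea; the ranges and the exact mapping are made up for illustration:

```python
import colorsys

def depth_color(z, w, z_range=(0.0, 1.0), w_range=(0.0, 1.0)):
    """Encode the third dimension as brightness (HSV value) and the
    fourth dimension as hue. Returns an RGB triple in [0, 1]."""
    value = (z - z_range[0]) / (z_range[1] - z_range[0])
    hue = (w - w_range[0]) / (w_range[1] - w_range[0])
    return colorsys.hsv_to_rgb(hue, 1.0, value)

# Two points at the same (x, y) screen position but different z/w
# get visibly different colors:
near_red = depth_color(z=0.9, w=0.0)    # bright red
far_green = depth_color(z=0.3, w=1 / 3)  # dim green
```

The obvious cost is that color can no longer carry the object's own surface information, which is part of why this wouldn't work nearly as well.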
When I was in college I was fairly decent on the ping pong table, call it high amateur. I also have very very bad vision - -5 diopters in one eye and -6 in the other. At the time I wore contacts and I was also really really poor. At some point I lost one of my contacts and for six months I was effectively blind in one eye for the purposes of ping pong. It took me about a month, but fairly quickly I was playing at my old level without anything close to stereo vision.
It turns out that stereo vision is the least important of several depth cues humans use, and is only really effective out to maybe 10 feet or so--beyond that the parallax is too small to give much useful info, and we rely on apparent size, relative motion, surrounding context, etc. I can catch a ball just fine, as an adult.
It's really only an issue within arm's reach, and then only when I'm distracted. Occasionally I'll reach for something without paying attention and miss by, like, a foot. Also, I can't use 3D glasses. (With the new polarized kind I can at least wear them and see a normal 2D movie. With the old red/blue kind, everything would be either red or blue, depending on which eye I was using.)
You might have an advantage with not being able to watch some 3D movies - you can tell your friends/family that a medical condition prevents you going to them.
I have comparatively uninteresting/normal vision (biggest issue shortsightedness) but I have this too. My left eye sees more blue, my right eye more red. Looks like the cone distribution wasn't perfect.
I'm not sure, but I think the sun's rays don't have much of a blue component: my left eye gets fractionally less sore from bright sunlight, so if it's really bright out I'll generally be closing my right eye.
Once I got the new glasses I was astonished at how flat and "PlayStation-like" everything looked. My depth perception was thrown off and everything looked much closer to me than it actually was.
I think my brain was using blurriness as an indicator of depth, in addition to binocular vision, and with my myopia corrected it lost that information and had to readjust to relying primarily on binocular vision to gauge depth.
Most of us do. But people who only have sight in one eye have no difficulty perceiving three dimensions. And some fairly large percentage of people with binocular vision are stereoblind, typically without even realizing it.
If anything, I'd say it's not that
> We're aided in converting our 2x 2D sight into 3D mental models through the use of color and parallax.
so much as that we convert our 2D sight into 3D mental models through the use of stuff like color and parallax, aided by stereopsis.
Those without binocular vision can have a fairly large range of non-specific symptoms and perception problems. There is an interesting discussion of the change in understanding in this link.
Heinz von Foerster’s 1970-1971 experiment at the Biological Computing Laboratory for apprehending the fourth dimension is unique ... combining four dimensional geometry, stereoscopic vision, and joystick manipulation of objects on the screen. ... The fourth dimension was chosen as the knowledge to be acquired because there was no chance that any subjects would have attempted such knowledge before the experiment. By allowing the physical “grasping” of the visual object, where one hand coordinated movement on three axes in the 3rd dimension, while the other similarly controlled three axes of movement in the 4th dimension, subjects were able to intuitively figure out that the strange succession of transforming 3D objects they were seeing (with 3D glasses) were cross-sections of a single 4D object.
Disclaimer: I studied with von Foerster years ago, and he'd mentioned this. I only today found this online reference. I want those 4D Toys. Steam, here I come.
and even better, this: https://youtu.be/dy_MUfBuq2I
Here is a good intro to 4D Visualization
Now I see this on HN :)
So I searched and found this: http://www.urticator.net/maze/
It seems to be exactly what you propose, a 4D world rendered into 3D space. It even has a kind of "stereo" mode, enabling a 3D experience on a 2D monitor.
And to expand on the idea: Once we become familiar with 4D, we can continue and use it to explore 5D, can't we?
It seems pretty clear that looking at 2D images (and in particular, still ones) is a learned task, like reading, rather than an innate one. Both appear to tie into deep structures in the brain, but both are very recent inventions.
From the perspective of the developmental stages described by Piaget, children learn to view in 3-space primarily with objects within reach (parallax pretty much peters out around the ends of your arms). Once the child becomes mobile, she is able to use semantic understanding to estimate the size of distant objects and get a rough idea of distance. The whole human process of seeing is very different from the way, say, a NN is trained on an image: the whole thing isn't gulped in at once, but we foveate on various parts of the image and assemble / confabulate a whole. You can see this in the structure of Chinese classical painting or pre-perspective European paintings: distant objects aren't sized in any way proportional to their apparent size. This really maps more to how much attention you pay to various objects in the scene.
You then learn to map that into a 3D model which I believe (but am not digging up refs this time, sorry) has hardware support.
Thus the 2D->3D process exploits a lot of learned and innate knowledge and technique that you have already developed. With one exception you haven't any 4D experience. That one exception is temporal data -- we can easily extrapolate from, say, a sphere shrinking and growing. Apart from that, there isn't much to work with.
So a 4D rendering is just more information on a flat 2D screen. Your brain can't process all of this information, so it "can't see it". But it is there.
Dunno if these coordinates translate to a cube or just some 2-3D object but I hope you get the point.
It would look like VR. 4D toys supports VR already. I tried it but it did not advance my understanding further than the 2D version, although using 3D tracked controllers is definitely a big improvement over a 2D mouse for manipulating 3D objects or 3D projections of 4D objects.
E.g. When you look at a picture with something that looks like a chair, you assume that it’s indeed a chair, and then you can estimate its size/pose/etc. But there are infinitely many non-chair shapes that would produce the exact same projection. It’s just that you won’t encounter them in real life, except maybe in trickshots like this: https://youtu.be/SKpOKXAVjGo
I've tried a maze game and a 4D space shooter before and I could never wrap my head around them. I don't know if it was just poor representations or if it was just because my brain is incapable of understanding.
I played with the 4D toys app after it showed up on /r/math a while ago. I like it and I think it's useful. My only complaint would be that it's a little too open ended. While it's nice to provide a simulated tactile experience of four dimensions, I think the app should provide a bit more visual intuition. That's one of the things I like about this video.
Step 1: Use the Schläfli generator from here. Schläfli symbols are a compact description of regular polytopes, and there is a recursive algorithm to generate vertices, edges, faces, etc. from them. The base case of the recursion is dimension 1, so you make 4 calls to get to dimension 4.
Step 2: Intersect the edges of the polytope with a hyperplane (a 3D subset of 4D).
Step 3: You get a set of 3D points out of step 2. Draw the convex hull of them, which gives you triangles.
Step 4: Render the triangles somehow. I used matplotlib's 3d facilities (mplot3d), and we are working on raytracing them.
Step 5: Animate over different hyperplanes. Take the min and max along the w axis; hyperplanes between them give non-empty slices. Now you can "see" the 4D polytope using time as the 4th dimension.
I'm sure he's doing something more advanced (4D collision detection), but this is all we needed to reproduce something that looks kinda cool.
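For the simplest case, a tesseract with vertices at every corner of {0,1}^4, steps 2-3 and 5 can be sketched without the Schläfli generator or the convex-hull/rendering steps (this is an illustrative reconstruction, not the actual code described above):

```python
from itertools import product

def tesseract_slice(w0):
    """Intersect the edges of a unit tesseract with the hyperplane
    w = w0 and return the resulting 3D intersection points (step 2-3)."""
    verts = list(product((0.0, 1.0), repeat=4))
    # Edges connect vertices differing in exactly one coordinate.
    edges = [(a, b) for a in verts for b in verts
             if a < b and sum(x != y for x, y in zip(a, b)) == 1]
    points = []
    for a, b in edges:
        wa, wb = a[3], b[3]
        if wa == wb:
            continue  # edge lies parallel to the slicing hyperplane
        t = (w0 - wa) / (wb - wa)
        if 0.0 <= t <= 1.0:
            # Linear interpolation gives the crossing point; drop w.
            points.append(tuple(a[i] + t * (b[i] - a[i]) for i in range(3)))
    return points

# Step 5: animate w0 between the min and max w (here 0 and 1).
# Every slice of a tesseract is a unit cube: 8 corner points.
print(len(tesseract_slice(0.5)))
```

Animating `w0` from 0 to 1 then "shows" the 4D polytope using time as the 4th dimension, as in step 5.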
I can represent 3D quite comfortably on 2D monitors, can there be an intuitive mapping of 4D to a 3D VR view?
I know 3D mapped to 2D suffers from occlusions and relies heavily on cues like perspective, shadow etc. But even a less intuitive 4D view could become intuitive with time, too.
edit: found this: https://youtu.be/S-yRYmdsnGs?t=252
even better: https://youtu.be/dy_MUfBuq2I (turn on subtitles)
Additionally, most voxels would appear different from different view perspectives, due to more or fewer voxels covering them.
The problem is that in 4D all voxels are visible to the viewer. So viewing a 4D apple would allow you to see the apple from all possible view points simultaneously, including interior views.
To me it just doesn't seem possible to replicate this concept in 3D VR.
Here's a video of a rotating 4d hypercube in a 3d perspective:
It's called a tesseract, and just as each face of a 3d cube is a 2d cube (a square), each face of a 4d cube is a normal 3d cube, which we see skewed by the perspective.
Your linked video maps the 4th dimension to time, it doesn't project to 3D. Projecting through time (especially non-interactively) lacks the immediate feedback needed for the brain to grasp it as intuition.
No, it projects it to 3d. The movements you see are rotations, not movements of an intersection plane. You can clearly see at any point in time each of the 8 identical cubes making up the tesseract, skewed and resized by projection to a 3d perspective.
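A minimal sketch of that kind of projection (not the video's actual code; the rotation plane and viewer distance are arbitrary choices for illustration): rotate the tesseract's vertices in the x-w plane, then perspective-project 4D to 3D.

```python
import math
from itertools import product

def rotate_xw(p, angle):
    """Rotate a 4D point in the x-w plane; after projection this looks
    like the 'impossible' morphing seen in tesseract animations."""
    x, y, z, w = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * w, y, z, s * x + c * w)

def project_to_3d(p, viewer_dist=3.0):
    """Perspective-project 4D -> 3D by dividing by distance along w,
    so cells nearer the viewer in w appear larger."""
    x, y, z, w = p
    d = viewer_dist - w
    return (x / d, y / d, z / d)

# Tesseract vertices centered at the origin, corners at (+-1)^4.
verts = list(product((-1.0, 1.0), repeat=4))
# One animation frame: every vertex rotated, then projected.
frame = [project_to_3d(rotate_xw(v, math.pi / 6)) for v in verts]
```

Connecting vertices that differ in exactly one coordinate then draws the 8 skewed cubes the parent comment describes.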
I'd give a 50% shot it lands on Steam before the end of 2019. (35% it doesn't make 2019 but lands before the end of 2020, remainder that it lands later or never releases.)
My gut feeling is that we are not there yet, and that 4D toys is an attempt by the author to monetize his development tools in order to be able to complete the main project. I hope it turns out well, Miegakure is definitely in my watch list.
* but remember that going from 13D to 14D creates as much extra complexity as going from 2D to 3D
It really makes me want to try it out. I wonder if it's really the same without VR.
 https://en.wikipedia.org/wiki/Mimsy_Were_the_Borogoves (watch out, there are spoilers here!)
Klein bottles are the same thing but with an added dimension: any representation of a Klein bottle in 3D makes it look like it's going through itself, even though in 4D it wouldn't: http://s3files.core77.com/blog/images/2013/06/klein-bottle-0...
It's also true if you remove a dimension: a 1D Möbius strip would simply be a circle, but if you try to draw it in 1D you end up with a segment where both halves of the circle are overlapped. So each time we have an N-dimensional object that can only be properly represented in N+1 dimensions.
That's also the same reason you can't solve the problem of connecting three objects to three other objects on a 2D plane without intersecting:
A B C
X Y Z
Topology is fun.
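One way to see why, sketched in Python: a simple bipartite planar graph has no triangles, so every face needs at least 4 edges, and Euler's formula (v - e + f = 2) then forces e <= 2v - 4. The three-utilities graph K3,3 above violates that bound.

```python
def bipartite_planarity_bound(v, e):
    """Necessary condition for a simple bipartite graph to be planar:
    no odd cycles means every face has >= 4 edges, and Euler's formula
    v - e + f = 2 then gives e <= 2*v - 4."""
    return e <= 2 * v - 4

# K3,3 (A, B, C each connected to X, Y, Z): 6 nodes, 9 edges.
print(bipartite_planarity_bound(6, 9))  # False: can't be drawn crossing-free
```

A 4-cycle (v=4, e=4) passes the bound, and indeed can be drawn flat; K3,3 fails it, so some pair of connections must intersect.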
I guess this slicing technique works but it would be a bit weird.
It helps that there is a little visualization that is just a line-per-object showing where you are, and where all the objects are intersecting the 4th dimension, that you also use to move "back and forth."
One thing I found myself doing was grabbing objects at one of their edges in the 4th dimension, by moving myself to near their boundary, and then using them like brooms. It's really easy to understand with the case of a hypersphere, since at its edge it's just a smaller sphere than at the middle. So you grab that small sphere at the edge, and push in the direction towards its middle in the 4th dimension, and it will act like a bowling ball. You won't see the stuff you are pushing around because the sphere is "ahead" of you, unless they roll around the sphere, then you'll pass them. Once you reach the edge of the 4th dimension, all the stuff you kept pushing will be there.
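That hypersphere behaviour follows directly from Pythagoras: a radius-r hypersphere sliced at offset d along the 4th axis shows up as an ordinary sphere of radius sqrt(r^2 - d^2). A tiny sketch:

```python
import math

def slice_radius(r, d):
    """Radius of the 3D sphere you see when a 4D hypersphere of radius r
    is intersected at distance d from its center along the 4th axis.
    Past the edge (|d| > r) there is no intersection."""
    return math.sqrt(max(r * r - d * d, 0.0))

slice_radius(1.0, 0.0)  # full sphere at the middle: 1.0
slice_radius(1.0, 0.8)  # small sphere near the 4D edge: ~0.6
```

So grabbing a hypersphere "at its edge" really does mean grabbing a much smaller sphere than the one you see at its 4D middle.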
Predicting how 3D intersections change as objects rotate about in the 4th dimension still seems like chaos to me though, except in the case of hyperspheres, which basically don't change as they rotate, but I only played around for about an hour or so. The only way I found to rotate objects in the 4th dimension was to have them collide with each other, or the walls and floor, which makes it kind of hard to carefully experiment with their rotations.
New mathematics no less!
A person looking back upon the three-dimensional world from four-dimensional space for the first time realized this right away: He had never seen the world while he was in it. If the three-dimensional world were likened to a picture, all he had seen before was just a narrow view from the side: a line. Only from four-dimensional space could he see the picture as a whole. He would describe it this way: Nothing blocked whatever was placed behind it. Even the interiors of sealed spaces were laid open. This seemed a simple change, but when the world was displayed this way, the visual effect was utterly stunning. When all barriers and concealments were stripped away, and everything was exposed, the amount of information entering the viewer’s eyes was hundreds of millions of times greater than when he was in three-dimensional space. The brain could not even process so much information right away.
In Morovich and Guan’s eyes, Blue Space was a magnificent, immense painting that had just been unrolled. They could see all the way to the stern, and all the way to the bow; they could see the inside of every cabin and every sealed container in the ship; they could see the liquid flowing through the maze of tubes, and the fiery ball of fusion in the reactor at the stern.... Of course, the rules of perspective remained in operation, and objects far away appeared indistinct, but everything was visible.
Given this description, those who had never experienced four-dimensional space might get the wrong impression that they were seeing everything “through” the hull. But no, they were not seeing “through” anything. Everything was laid out in the open, just like when we look at a circle drawn on a piece of paper, we can see the inside of the circle without looking “through” anything. This kind of openness extended to every level, and the hardest part was describing how it applied to solid objects. One could see the interior of solids, such as the bulkheads or a piece of metal or a rock—one could see all the cross sections at once! Morovich and Guan were drowning in a sea of information—all the details of the universe were gathered around them and fighting for their attention in vivid colors.
Morovich and Guan had to learn to deal with an entirely novel visual phenomenon: unlimited details. In three-dimensional space, the human visual system dealt with limited details. No matter how complicated the environment or the object, the visible elements were limited. Given enough time, it was always possible to take in most of the details one by one. But when one viewed the three-dimensional world from four-dimensional space, all concealed and hidden details were revealed simultaneously, since three-dimensional objects were laid open at every level. Take a sealed container as an example: One could see not only what was inside, but also the interiors of the objects inside. This boundless disclosure and exposure led to the unlimited details on display.
Everything in the ship lay exposed before Morovich and Guan, but even when observing some specific object, such as a cup or a pen, they saw infinite details, and the information received by their visual systems was incalculable. Even a lifetime would not be enough to take in the shape of any one of these objects in four-dimensional space. When an object was revealed at all levels in four-dimensional space, it created in the viewer a vertigo-inducing sensation of depth, like a set of Russian nesting dolls that went on without end. Bounded in a nutshell but counting oneself a king of infinite space was no longer merely a metaphor.
About the same as the use of novels, music, paintings - stuff like that...
Now where is my Nobel Prize?