I haven't tried yours, but OMDO (linked in the GP) is an unplayable, dizziness-inducing headache.
Perhaps have a mode where the close/far stuff is swapped so it looks right to those of us who cross our eyes to see these.
Crossing your eyes is much more natural, and you can easily do it by looking at something close to your face.
Diverging is much harder, at least for me and most other people I’ve talked to.
So I can see these by crossing my eyes, but everything is inverted: it comes out as a front layer with shapes cut out of it, receding behind.
However, when you've mastered it, and when the offset (i.e. size) is correct, it's much easier to snap into than the cross-eyed viewing, and much easier to stay in focus while looking at different parts of the image.
I like to look out a window at something distant, like a tree or car outside, then adjust my focus on the window pane itself (perhaps the little flecks of dust on it, or better yet one dot of paint on it).
That is a very natural scenario and, at least for me, super easy. I've done it for decades though, so maybe it's just natural for me, lol.
I can FEEL my eyes slightly squeezing to change shape (and thus change focus), and at this point it's simple muscle control, and it's fun.
Stereograms are NOT meant to be enjoyed from across a room.
It's REALLY hard to 'catch' a stereogram from far away, so why are you even trying?
However, the "focusing on-or-through a transparent thing" that WhiteSage mentions works well at slightly further distances and can be used to help you figure this stuff out.
As for which one is easier: For some reason, I have always been able to do the “uncrossing” on command, and it feels more natural and more relaxed to me. I sometimes do it to compare almost identical things that are next to each other. Crossing my eyes feels straining.
For me, everything appears as it should.
Here's one place where this is discussed: https://www.reddit.com/r/MagicEye/comments/2u0cc1/inverted_i...
1) Higher frame rate will probably help - it will look even more like old TV snow wrapped around things.
2) Color per object should be possible but it will bleed. This may or may not help the effect.
3) Possibly change the dot size based on distance (some dots are seen at two different distances, so average them?). This would be similar to how color might be done. The advantage would be that the snow becomes more like a texture that gets smaller further away.
4) I'm not sure about the notion of a "base image"; I always assumed a method of rendering that may not be what others do.
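For reference, all of these suggestions would hook into the classic single-image random-dot construction (the same idea as the Witten et al. paper linked elsewhere in the thread). Here is a minimal one-row sketch; the function name, the eye-separation default, and the depth convention (0 = far, 1 = near) are my own choices, and hidden-surface conflict handling is omitted for brevity:

```javascript
// Minimal one-row SIRDS sketch. depth[x] in [0, 1] (0 = far, 1 = near);
// eyeSep is the eye separation in pixels. Conflict resolution (hidden
// surfaces) is omitted to keep the sketch short.
function stereogramRow(depth, eyeSep = 90) {
  const w = depth.length;
  const same = new Array(w);
  for (let x = 0; x < w; x++) same[x] = x;   // each pixel starts unconstrained

  for (let x = 0; x < w; x++) {
    const z = depth[x];
    // separation between the two matching pixels at this depth
    const s = Math.round((1 - z / 3) * eyeSep / (2 - z / 3));
    const left = x - Math.floor(s / 2);
    const right = left + s;
    if (left >= 0 && right < w) same[left] = right;  // constrain pair to match
  }

  // resolve constraints right-to-left; unconstrained pixels get a random dot
  const row = new Array(w);
  for (let x = w - 1; x >= 0; x--) {
    row[x] = same[x] === x ? (Math.random() < 0.5 ? 0 : 1) : row[same[x]];
  }
  return row;
}
```

A color or dot-size variant would change only what gets written at the `same[x] === x` branch; the matching constraint is what carries the depth.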
It had a magic eye feature, which I haven't really seen before or since so it's neat to see a modern implementation in the browser.
filter: contrast(.3) sepia(405%) hue-rotate(273deg);
Then view full screen (F11).
It fills your screen with a red-toned version with less contrast, which I prefer. These things are highly personal, so I suggest you experiment to find settings you like. Setting image rendering to pixelated is personal as well; some will prefer the softer look of blurring, but it helps me lock on to the stereogram.
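If you don't want to edit the stylesheet by hand, the same look can be applied from the devtools console. This assumes the demo draws to a `canvas` element; adjust the selector if it doesn't:

```javascript
// Apply the red-toned, low-contrast look from the console. The `canvas`
// selector is an assumption about the demo's markup; adjust as needed.
const el = document.querySelector('canvas');
el.style.filter = 'contrast(.3) sepia(405%) hue-rotate(273deg)';
el.style.imageRendering = 'pixelated'; // crisper dots; omit if you prefer blur
```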
Question: did you experiment with ways to reduce the flickering, or with the use of colors?
https://en.wikipedia.org/wiki/Autostereogram will tell you more about this.
http://www.magiceye.com/faq_example.htm has some helpful instructions (you're looking for a planet with rings)
Would be useful if it had some markers for achieving the correct amount of focus.
In this example, I always see two squares when trying to look behind it, never 4 like the instructions require.
How can I let my eyes focus behind the screen? I can focus my eyes in front of it by looking at my finger, and get 4 squares that way, but that doesn't work for focusing behind it, for me.
No planet appears for me when making it 3 squares while looking at a finger in front of it, either. And if I remove my finger, my eyes immediately change their focus :/
I spent ages trying to make these things work in the '80s or '90s, whenever the posters were the craze. I never got close, and there were quite a few others they simply didn't work on.
Does anyone know alternative non-random dot approaches to autostereogram generation?
Seeing the random dot approach in video highlights the lack of temporal coherence across frames.
1 - https://moefh.github.io/stereogram/index.html
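The usual non-random alternative is the "wallpaper"/tiled-pattern construction: each pixel simply copies the pixel s(z) to its left, seeded from a repeating pattern strip. A sketch (names and the eye-separation default are mine; same separation formula as the random-dot case); this also gets temporal coherence for free, since the seed pattern can stay fixed across frames:

```javascript
// Tiled-pattern stereogram row: pixels left of the first strip come from a
// repeating pattern; every other pixel copies the pixel s(z) to its left.
function tiledRow(depth, pattern, eyeSep = 90) {
  const w = depth.length;
  const row = new Array(w);
  for (let x = 0; x < w; x++) {
    const z = depth[x];                                   // 0 = far, 1 = near
    const s = Math.round((1 - z / 3) * eyeSep / (2 - z / 3));
    row[x] = x < s ? pattern[x % pattern.length] : row[x - s];
  }
  return row;
}
```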
Does anyone else have this experience? Is there something physically different about my eyes or brain compared to people who can see them?
Go to the paper: https://www.cs.waikato.ac.nz/~ihw/papers/94-HWT-SI-IHW-SIRDS...
Go to figure 7, "Figure 7. Stereogram showing a hemisphere", on page 24. There are two black dots near the bottom. Cross your eyes: you should now see 4 black dots (2 per eye). Adjust how "crossed" your eyes are such that the "inner" two dots line up, s.t. there are only "three" dots, like `(dot) (two overlapping dots) (dot)`. Now, the hard part: your vision likely blurred when you crossed your eyes; without uncrossing your eyes, and maintaining the position/overlapping of the dots, bring your vision into focus. (This is, IMO, the hard bit.)
Once I've got it into focus, I find my vision "stabilizes": I'm able to look around w/o having to concentrate on keeping my eyes crossed to a particular amount. At this point, I can look up above the dots, and there is a hemisphere in the image, as the figure caption says.
This doesn't have to be done w/ magic eye images, either. Two normal 3d renderings of a landscape side-by-side, where the camera in one rendering slightly horizontally offset from the other, will also work. The idea is the same: overlay the images in your visual field, then bring it into focus.
At least, that's what works for me. According to the sibling comment from chubasco, there's more to it, I guess, but the above is all I know.
In this case, I can stabilize, but I feel like I'm still seeing the inverse of what I should be seeing. The hemisphere looks like it's carved into the image (with various bands that I assume are shading). The very center looks like a hole was cut out and I can see "behind" the hemisphere. I assume this is supposed to be a highlight and the nearest point to me, but I see it as the furthest.
Even when your eyes are pointing in the right direction, your eyes' lenses can be incorrectly focused. In order to see the image, you have to hit the correct combination of focus and parallax (which will disagree with what your brain is used to seeing).
> Is there something physically different about my eyes or brain compared to people who can see them?
This is possible, but unlikely. If you genuinely can't perceive stereo depth (e.g. you're neurologically incapable of stereopsis, or you have vision in only one eye), this won't work for you.
But a couple of years ago I happened upon a magic eye image while lying with my laptop on my chest, putting the display quite close, and things just clicked. Now I'll spend hours flipping through magic eye images; it's only gotten easier with practice, and the effect is incredible considering there are no drugs involved.
I always wondered why other kids could do things that I couldn't, so I guess I spent an excessive amount of time trying to catch up :)
My version shows a simpler scene (a single tumbling cube) and also works with random-dot rendering, but also supports rendering with a tiled image (I found that tiled images work better for static 3d scenes while random-dot works better for animated scenes).
a) Is it possible to render color images this way?
b) Is it possible to have the random dots not change all the time? E.g. when I don't move in this example, I don't see the need to draw new random dots. Also, when moving sideways, would it be possible to keep the dots where they are relative to the world?
However, this color image will not correspond in any way to the 3D geometry of the scene. I don't know of any way to get color information to correspond to BOTH the 3D geometry and the stereographic information (without using two images).
Regular Magic-Eye pictures do this too, but to a much less noticeable degree.
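On question (b) above, one idea (a sketch of the concept, not necessarily what this demo does) is to derive each free dot from a deterministic hash of the world-space point it belongs to, instead of calling Math.random() every frame. A static scene then produces identical frames, and sideways motion moves the dots with the world:

```javascript
// Deterministic dot from integer world coordinates: the same (wx, wy)
// always yields the same 0/1 value. The constants are arbitrary odd
// multipliers for mixing; any stable hash would do.
function hashDot(wx, wy) {
  let h = (wx * 374761393 + wy * 668265263) | 0;
  h = Math.imul(h ^ (h >>> 13), 1274126177);
  return (h ^ (h >>> 16)) & 1;
}
```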
With the image constantly changing, there is no point to focus on. I kept double-crossing my eyes because there was no reference.
The eye can pick up very small sub-pixel shifts when doing parallax solving. Good antialiasing would be necessary for a smooth/continuous perception of depth.
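To illustrate, sub-pixel disparity just means sampling the matching pixel at a fractional offset and blending between its two nearest neighbors; a minimal sketch (names are illustrative):

```javascript
// Read a grayscale row at the fractional position x - s, with linear
// interpolation between the two nearest pixels (1D antialiasing).
function sampleShifted(src, x, s) {
  const xs = x - s;                 // fractional source position
  const i = Math.floor(xs);
  const f = xs - i;
  const clamp = j => src[Math.max(0, Math.min(src.length - 1, j))];
  return clamp(i) * (1 - f) + clamp(i + 1) * f;
}
```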
That way we could actually interact with a computer while staring at a screen of randomly changing characters, just like in the movies.
Alas it does actually make me nauseous :(
I enjoyed editing a hole in front of me, falling in and editing a staircase to get back out.
now I have to find the easter egg