This is pretty cool. I could see Apple or someone taking this idea and running with it. Like Face ID, they could use the front-facing camera with a neural network to determine the location of nearby light sources, then provide an API to render 3D and reflective surfaces in an environmentally consistent way. Perhaps not just reflective highlights, but actual reflected and refracted images from the camera.
Apple used to have a few of these sorts of effects scattered throughout iOS back in the skeuomorphic days; the volume slider in particular made heavy use of it.
I love little effects like this. Skeuomorphism might’ve been a bit overbearing in iOS 6, but I’d really like to see it find its way back in a more reserved capacity. “Flat” and even the newer “flat + depth” are very dull.
It isn't working well on Firefox for Android (at least not for me). It's so laggy that I'd say there is no shiny effect at all.
I haven't been able to test this as my test device is too old. But I'm betting my money on Firefox somehow rendering `background-attachment: fixed` differently from WebKit/Blink, which is causing the slowdown.
I'm looking at switching to GPU-accelerated transforms instead, but that brings a couple of new challenges.
It's a lot slower in Firefox, but I find the effect looks nicer there (if you ignore the lag). Chrome renders faster, but the effect appears much more subdued to me.
Nuts, that doesn't sound good. I haven't been able to test on Android because my Android device doesn't have a gyroscope :\ Hope to fix this next week.
The website says it is untested on Android, and it shows how weird and inconsistent browsers are.
On Android Firefox it looks like the iPhone example, but the light movement is very choppy (maybe hardware acceleration is missing or something like that).
On Chrome the movement is smooth, but the markup is not rendered correctly.
> maybe hardware acceleration is missing or something like that
It's doing some seriously slow stuff behind the scenes. For example, when the library initialises it creates a 64 * pixelDensity square canvas, then loops through all the pixels in it one by one, using a randomly generated transparent color value to draw a 1x1 rectangle to each one (see the `generateNoise()` function in https://github.com/rikschennink/shiny/blob/master/src/index....). That's just slooooow. The HTML5 canvas API has a `putImageData` method that is much faster, because all you're really doing when you fill an image is assigning values to a `Uint8ClampedArray` with 4 values (RGBA) per pixel.
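Something like this (a rough sketch, not the library's actual code) produces the same kind of noise tile in one pass:

```js
// Fill an ImageData buffer directly instead of drawing 1x1 rects per pixel.
function generateNoiseFast(size) {
  const canvas = document.createElement('canvas');
  canvas.width = canvas.height = size;
  const ctx = canvas.getContext('2d');
  const image = ctx.createImageData(size, size);
  const data = image.data; // Uint8ClampedArray, 4 values (RGBA) per pixel
  for (let i = 0; i < data.length; i += 1) {
    data[i] = Math.random() * 255; // random semi-transparent color values
  }
  ctx.putImageData(image, 0, 0); // one blit instead of size*size fillRect calls
  return canvas;
}
```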
It'd be relatively straightforward to achieve this effect using WebGL and a quad overlaying the element, just passing in the orientation data as a vec2 uniform. That's how some of the effects in my React Neon library (https://react-neon.ooer.com/) work for mouse coordinates. Although, that said, Neon is Chrome-only because it relies on ResizeObserver at the moment. One day I'll fix that...
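Roughly, the WebGL approach looks like this (a sketch for illustration; names like `u_tilt` are made up and not from any library):

```js
const canvas = document.querySelector('canvas');
const gl = canvas.getContext('webgl');

function compile(type, src) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, src);
  gl.compileShader(shader);
  return shader;
}

const program = gl.createProgram();
gl.attachShader(program, compile(gl.VERTEX_SHADER, `
  attribute vec2 a_pos;
  void main() { gl_Position = vec4(a_pos, 0.0, 1.0); }
`));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER, `
  precision mediump float;
  uniform vec2 u_tilt; // orientation (or mouse) data
  void main() {
    // toy glare: brightness falls off with distance from the tilt point
    float glare = clamp(1.0 - distance(gl_FragCoord.xy / 300.0, u_tilt), 0.0, 1.0);
    gl_FragColor = vec4(vec3(glare), glare * 0.5);
  }
`));
gl.linkProgram(program);
gl.useProgram(program);

// Two triangles covering the whole viewport.
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER,
  new Float32Array([-1, -1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1]),
  gl.STATIC_DRAW);
const loc = gl.getAttribLocation(program, 'a_pos');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

const tilt = gl.getUniformLocation(program, 'u_tilt');
window.addEventListener('deviceorientation', (e) => {
  gl.uniform2f(tilt, e.gamma / 90, e.beta / 180); // roughly normalise to [-1, 1]
  gl.drawArrays(gl.TRIANGLES, 0, 6);
});
```

The redraw only touches the uniform, so the per-frame cost is one cheap GPU draw call rather than a CSS repaint.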
It does that once, and it does it to avoid having to ship an image or base64-encoded string to create a noise effect. It's a balancing act between a bigger library and CPU usage. It takes the CPU 15 milliseconds (more than I expected, but it's on page load). Good to know that `putImageData` is faster, will look into that.
What is most likely causing the choppiness is that it's animating background gradients. There is a lot of room for improvement there: instead of animating the background gradient, we could transform a layer instead.
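Something in this direction, for example (a sketch of the idea, not how it's implemented today; the `.shiny-glare` overlay element is hypothetical):

```js
// Move a separately composited gradient layer with translate3d instead of
// repainting the element's background gradient on every frame.
const glare = document.querySelector('.shiny-glare');
glare.style.willChange = 'transform'; // hint: promote to its own GPU layer

window.addEventListener('deviceorientation', (e) => {
  const x = (e.gamma / 90) * 50; // map tilt to a pixel offset
  const y = (e.beta / 180) * 50;
  // transforms run on the compositor, so no gradient repaint per frame
  glare.style.transform = `translate3d(${x}px, ${y}px, 0)`;
});
```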
Cheers. It fails the same way in macOS Firefox (and everything that isn't Chrome) with "window.ResizeObserver is not a constructor". When I get time I'll fix that.
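Probably with something like this (just a sketch, not what Neon currently does): feature-detect and fall back to window resize events.

```js
function observeSize(el, callback) {
  if (typeof window.ResizeObserver === 'function') {
    const ro = new ResizeObserver(() => callback(el.getBoundingClientRect()));
    ro.observe(el);
    return () => ro.disconnect();
  }
  // Fallback: window resize only (misses element-only size changes)
  const onResize = () => callback(el.getBoundingClientRect());
  window.addEventListener('resize', onResize);
  return () => window.removeEventListener('resize', onResize);
}
```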
Based on the description I thought it was a kind of "fake glare effect for testing website readability" thing, but it's more like another lib for the https://github.com/dsalaj/awesome-quirks list, I guess :)
Quick feedback: I got a little confused by the GitHub readme because the gif shows a video recording of the phone. The physical reflections on the glass of the phone seem to be masking the virtual reflections on the credit card image!
Once I went to the demo website on my mobile phone, everything made perfect sense!
Agreed with this comment -- I starred the library, it's very cool, but you could show it off better with a 3D model that the user could move around with their mouse (not sure how hard that would be to accomplish; I haven't looked at the code yet).
I'm kinda hesitant to do this, as it's not really what you'd expect to happen on a device that is sitting still. On mobile it responds to the tiniest twist or turn of your hand, which results in familiar reflection behavior.
Admittedly, I didn't read the source code, but it looks like this would use the gyroscope API (edit: it does)? Probably wouldn't use this profusely, but it seems like a nice addition for things like credit card interfaces. Pretty neat!
Thanks! Indeed it does use the gyroscope. Unfortunately (as always on the web) the implementation apparently differs a lot per device, so I'm now gathering test devices to make it respond the same on iOS and Android.
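For the curious, the general shape of the wiring looks something like this (a sketch, not the library's exact code; one known platform difference is that newer iOS versions require an explicit permission request before `deviceorientation` fires):

```js
async function listenToTilt(callback) {
  if (typeof DeviceOrientationEvent !== 'undefined' &&
      typeof DeviceOrientationEvent.requestPermission === 'function') {
    // Newer iOS: must be called from a user gesture (e.g. a tap handler)
    const state = await DeviceOrientationEvent.requestPermission();
    if (state !== 'granted') return;
  }
  window.addEventListener('deviceorientation', (e) => {
    // beta: front-to-back tilt, gamma: left-to-right tilt, both in degrees;
    // the reported ranges are what seem to vary per device/browser
    callback(e.beta, e.gamma);
  });
}
```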
Thanks for testing! The code is not really optimized at this point in time; I'm looking into switching to GPU acceleration, which should improve performance a lot.