A Canvas Made of Pixels (claybavor.com)
525 points by robk 499 days ago | 105 comments



>> A fun (but impractical and frustrating) variant of this feature is to have the image change as soon as the viewer looks away. So you’re looking at a painting, glance away to another room, and look back to find a new painting hanging on the wall.

Re this comment. There is this thing called "saccadic masking", the gist of which is that we are effectively blind while our eyes move from focusing on one position to the next. Depending on the "distance" (angle) traveled by the eye, this can last up to tens of milliseconds. Enough time to do some cool stuff!

One of the studies that conclusively showed this effect, a fairly long time ago, had participants wear elaborate headgear that allowed the researchers to track exactly where their participants' eyes were looking. They had them read a standard sentence like

"the quick brown fox jumped over the lazy dog",

but with a twist! Every word in the sentence was masked, except for the word the participant was focusing on at that moment. So if the participant focused on the word fox, the screen would show

"xxx xxxxx xxxxx fox xxxxxx xxxx xxx xxxx xxx".

Whenever the system detected a saccade, it would recalculate which word the participant was about to look at (e.g. switch to dog) and change the display accordingly, now showing

"xxx xxxxx xxxxx xxx xxxxxx xxxx xxx xxxx dog"

Participants were asked if they noticed anything strange about the sentence, but they reported there was nothing strange about it! (Disregarding the heavy set of mirrors strapped to their heads.)

My point: if you could detect the eyes' saccades and calculate their landing position reasonably accurately, you could have the unattended region of the image be colourful noise; the theory says your visitors would be none the wiser. (Of course, this would break down with multiple people looking at the same image.)
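The masking scheme from that reading study is easy to sketch. Here is a minimal, hypothetical reimplementation of just the display logic (the real experiments of course used specialised eye-tracking hardware to pick the fixated word):

```python
def mask_sentence(sentence, fixated_index):
    """Replace every word except the fixated one with x's of equal length,
    mimicking the gaze-contingent display described above."""
    words = sentence.split()
    return " ".join(
        word if i == fixated_index else "x" * len(word)
        for i, word in enumerate(words)
    )

sentence = "the quick brown fox jumped over the lazy dog"
print(mask_sentence(sentence, 3))  # fixation on "fox"
print(mask_sentence(sentence, 8))  # after a saccade to "dog"
```

In a real gaze-contingent setup, the second call would be triggered by the eye tracker's saccade detection, with the swap completed before the saccade lands.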

Even cooler: procedurally re-generate parts of the image that are unattended, so that you're looking at an ever shifting image, but wouldn't quite be able to pin down what's happening. Similar to this video: https://www.youtube.com/watch?v=ubNF9QNEQLA


This is also the principle behind foveated rendering (https://en.wikipedia.org/wiki/Foveated_imaging) where you only render the graphics in high detail where the viewer is looking, and use a low-res image for the rest with the goal of saving computing power. Big area of development right now for VR!


That's really cool! I'm not 100% sure, but what I remember from reading a few psych papers about this is that as long as the "mask" is similar to the original in a few easy-to-calculate summary statistics (colour distribution, overall hue), it doesn't actually need to look anything like the original image; even calculating low-res imagery shouldn't be necessary. Our peripheral vision is just that bad (at least for static imagery; movement detection is actually pretty good). The foveated part is where the magic happens, and we've gotten pretty good at fooling ourselves into believing this "foveated" part is much larger than it actually is.
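As a purely illustrative sketch of that idea, assuming that matching per-channel mean and standard deviation counts as "similar enough" summary statistics, a statistics-matched noise patch could be generated like this:

```python
import numpy as np

def stat_matched_noise(region, rng=None):
    """Generate noise with the same per-channel mean and standard deviation
    as `region` (an H x W x 3 array) -- the kind of cheap summary-statistic
    match the comment suggests may suffice in the periphery."""
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(region.shape)
    mean = region.mean(axis=(0, 1))   # per-channel mean of the original patch
    std = region.std(axis=(0, 1))     # per-channel spread of the original patch
    return np.clip(noise * std + mean, 0, 255).astype(np.uint8)

# Demo on a random 64x64 RGB patch standing in for a peripheral image region.
patch = np.random.default_rng(1).integers(0, 256, (64, 64, 3)).astype(np.float64)
masked = stat_matched_noise(patch)
```

Whether mean and standard deviation alone are the right statistics is exactly the empirical question those papers were probing; richer texture statistics would be the next step up.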


Hey, something I know about! I actually worked on one of those research projects in college, programming the experiments. The idea is that there is a certain radius around the focal point where you stop being able to detect changes. I'm not sure what the final results were, but the theory was that you can calculate how blurry an image can be and still be discernible based on how far it is from the focal point of your vision. It was surprising how good people are at detecting changes in a blurry picture that's way out in their periphery.


Whenever he'd discuss fovea and periphery, my prof used to joke that if we had a fovea the size of our full field of view, we'd need a brain the size of an elephant to process all that information. It's interesting how we're very sensitive to sudden changes (thus movement) in our periphery, but are so bad at classifying/identifying static imagery.

I remember reading about right-eye and left-eye dominance, where they'd keep an image on the screen saccade-invariant (i.e. compensate for any saccades that were made), slowly moving a letter/character/word to the edges of the participant's field of view and asking when it was no longer legible. It became illegible surprisingly quickly, but at different positions for the left eye and the right eye for pretty much all participants.


Heh - I remember back in the very late '80s or early '90s, my dad was working on the flight simulator for the "swing wing" F111E's new avionics package that Australia was getting.

The sim cockpit had a pair of Silicon Graphics RealityEngine2s - one driving a projector that lit up a 5- or 6-meter-diameter quarter-spherical screen at low-ish resolution, and another driving a projector that was aimable and slaved to the pilot's helmet (and maybe even eye tracking, I can't remember) that projected a small patch of high-resolution imagery exactly where the pilot was looking. If you knew what it was doing, it was easy enough to "catch out" that system and see the edges where the two images joined, but once immersed in flying it disappeared completely. It was spectacularly obvious what was going on if you were watching the screen while someone else had the helmet on.

I _so_ wanted one of those RealityEngines back then. I suspect my phone now has more graphics processing power, though (I'm pretty sure my Galaxy S6 in a Gear VR does a significantly better job than that multi-million-dollar military project ~20 years ago...)


The SGI "Reality Engine" was also the GPU that ended up in the Nintendo 64, wasn't it? How much more powerful was the RE2?


It's always seemed to me that we could get much higher-quality VR if we could manage to set up a foveated display: a high-DPI display embedded in concentric rings of progressively-lower-DPI displays. It's easy enough to hold quality control to spec for a 600 DPI display if you only have to make it a square inch in size; and your memory bandwidth and parallel processing needs go way down if the outside rings can actually be treated as small screens, rather than as an extremely-high-resolution screen displaying a (monotonous and blurry) image.
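A back-of-the-envelope sketch (with entirely made-up radii and DPI figures, not taken from any real panel) shows how dramatically a concentric-ring layout shrinks the pixel budget compared with uniform high density:

```python
import math

def ring_pixels(dpi_levels):
    """Rough pixel budget for a circular foveated display built from
    concentric rings. dpi_levels: list of (outer_radius_inches, dpi)
    from the centre outwards. Purely illustrative numbers."""
    total, inner = 0.0, 0.0
    for outer, dpi in dpi_levels:
        area = math.pi * (outer**2 - inner**2)  # square inches in this ring
        total += area * dpi**2                  # pixels = area * (pixels/inch)^2
        inner = outer
    return total

# Hypothetical 6-inch-radius display: 600 DPI fovea, falling off outwards.
foveated = ring_pixels([(0.5, 600), (2, 200), (6, 60)])
uniform = ring_pixels([(6, 600)])
print(f"foveated uses {foveated / uniform:.1%} of the uniform pixel budget")
```

With these assumed numbers the ring version needs only a few percent of the uniform display's pixels, which is where the bandwidth and processing savings would come from.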

Of course, the big obvious problem with such a display is that the eye moves, and moves faster than you could possibly move around the display. The real key, I would think, would be something equivalent to a metamaterial convex lens, that could be "tilted" and "flexed" in the same way the lenses in our own eyes can, to redirect and "refocus" the centre of the image to the new eye position without actually moving it per se.

We already have a technology to achieve this sort of "tilting" and "flexing", it turns out: magnetic deflection, as seen in CRTs. No reason you couldn't use it to deflect a continuous parallel matrix of rays by a constant amount, rather than one continuously-shifting beam. Heck, you could use an array of coherent emitters (laser diodes) rather than point-source diodes, and use phosphor on the intermediary panel like the good old days.


I've seen this idea used in machine learning as well.

There was a paper [1] whose goal was a binary classification of the center pixel in a region of interest. Interestingly, their results improved when they applied a foveal blurring surrounding the pixel to be classified.

[1]: http://people.idsia.ch/~juergen/nips2012.pdf


Interesting! Thanks for sharing!

There are algorithms that mimic the fixation paths the eyes follow when presented with a novel image, very much related to modelling dopaminergic systems. They seem to find task-relevant, information-dense areas first, and then slowly spread out to less information-dense areas. I wonder if there'd be any benefit to running these algorithms on images, basically turning them into videos, and then running classifiers on those videos (with or without foveal blurring).


Do you have any more information about these algorithms? Or perhaps links to a paper or two? It gives me a couple of ideas and sounds very interesting! I wonder the same thing you do. Feeding an image to a neural network not as a single input, but as a series of inputs separated over time, 'reusing' the same neurons for different portions of the image, might allow for interesting feedback to develop.


@otakucode The thread has gone too deep, so I can't reply to your comment directly.

I remember seeing a video of a robot that would attend to different parts of a scene based on the "saliency", in the sense of novelty, of its features. I can't find the specific video, but I think the model running the robot is related to:

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3930917/

The paper is quite dense, as it describes biologically plausible models of dopaminergic systems, which makes the model quite complex as well, but it's interesting because this system is considered quite 'low-level', no cortex involved.

I'll add more if I can find any.


I just posted this in another comment, but M$ research has awesome foveated rendering: http://research.microsoft.com/apps/pubs/default.aspx?id=1766...


I got a live demo of this at a conference a few years back. There was a line to see the demo, so I first waited while others did it. They first showed the full resolution image, then enabled the foveated rendering. The latter looked terrible - a tiny patch of hires imagery and a massive blur for everything else.

When it was my turn, they started with the full resolution image. I was waiting for them to enable the foveated rendering - until they told me it was already running. I could not tell at all that it was not all rendering at full resolution. Really impressive.

The main presenter mentioned that the optic nerve/brain processing 'shuts down' for up to 40ms during a saccade, so they have that long to render the small region at full resolution between the time when your eye has its new target and the optic nerve comes back on line.


I'm so glad to hear this! Do you know of any progress since then? That very closely reminds me of my first time trying VR (last week). I sat and watched on the 2D monitor someone else doing it, and thought it looked incredibly stupid. Then I put on the headset..


I don't know of any explicit details of progress, but everyone I've mentioned it to in the VR space nods knowingly when I mention it, so I take that to mean that it is an area of active research and development.


I am pretty excited about this; I could see it being used with a variant of DLP technology to deliver ultra-high resolution and pixel density only to the area of the screen you are looking at.


Real-time raytracing would benefit so much from that.


I apologize for the stupid response to your intelligent and well-written comment, but this reminded me of the classic "Creepy Watson" video, playing around with a poorly implemented follower AI in a budget video game. Audio not required, but recommended. https://www.youtube.com/watch?v=13YlEPwOfmk


Crimes &amp; Punishments, the follow-up to that game, is actually quite good. There's a new one out this year called The Devil's Daughter.


Thanks for the explanation, that was great!

I imagine you could play Where's Waldo without changing screens. Just "find the next Waldo" in an ever-changing scene: the moment you find one, another is created at another spot.

Could be very addictive, although it sounds kind of stressful too.


You could also just have it decide exactly when it wants you to find Waldo. Imagine you had a Where's Waldo iPad app that had 20 different "pages" and you could set the amount of time it takes to get through them all, so you could keep your kid busy for a guaranteed minimum time. :)

Or even better, it's listening for you to say "ok, it's time to go in two minutes. Finish up," and it makes sure Waldo appears where the kid is looking just in time.

The downside is when your kid finds out he's going to murder you.


That's brilliantly evil. Part of me wants to implement it just for the evil factor.


+1 for evil


Or it places Waldo just outside of your field of focus, and constantly moves him as you look around.


It would be easier if each Waldo was looking or pointing towards the next one.


Found the article I was referencing - McConkie & Rayner (1975) - http://link.springer.com/article/10.3758/BF03203972


There was an exhibit demonstrating a similar optical phenomenon in the Exploratorium (http://www.exploratorium.edu/) in San Francisco:

The viewer watches a series of projected photographs of a streetscape with small differences between each (a pedestrian appears, a taxi disappears), but they don't see these changes because each time the image changes, a small flash occurs, which masks the visual transient that would otherwise draw attention to the change. The viewer can then press a button to suppress the interstitial flashes, and with no flashes, they can easily see the differences as they occur.


M$ research has AWESOME stuff on foveated graphics. They essentially track your eye movement and only render detail in the specific spot on the display you are looking at. Everything outside your eye's focus area is still rendered, but at a much lower quality. Saves immense amounts of resources: imagine your GPU only having to render 20% of the screen at a time!

http://research.microsoft.com/apps/pubs/default.aspx?id=1766...


As an analog to lossy compression - lossy rendering?


Was it really just replacing each of the other letters with an x? I can notice the strings of x's with my peripheral vision. Also, the words must have been quite a bit farther apart than they are on my screen.


I was paraphrasing this from memory. The dependency is quite complex: it depends on the number of degrees the text occupies in your field of vision, but also on the mode of reading you're in (very focused reading seems to narrow the fixation span). Also, please note that your eye makes constant microsaccades (https://en.wikipedia.org/wiki/Microsaccade), and the authors of the paper were probably compensating for these saccades as well, more or less.

Quote from the actual paper:

    This experiment has provided data which begin to
    answer the question about the size of the perceptual
    span during a fixation in reading. Although it may be
    possible in tasks other than reading for subjects to
    identify letters, word shapes, and word-length
    patterns some distance into the peripheral areas, in
    fluent reading this information appears to be obtained
    and used from a relatively narrow region. Thus, a
    theory of fluent reading need not suppose that
    word-shape and specific letter information is obtained
    from a region occupied by more than about three or
    four words during a fixation, and perhaps not that
    large if the span is not symmetrical around the point
    of central vision, a question not tested in the present
    study. Thus, it does not appear to be true that entire
    sentences are seen during a fixation; in fact, for most
    fixations, not even a complete phrase will lie within
    this area.
So the "window" or "span" that needed to be un-masked was about 3-4 words wide (interestingly, this did not necessarily depend fully on the length of the words).


Yes, but you're also able to focus on the words made up entirely of x's. I don't think you can really judge the effect, because 1) it's not the same effect here and 2) you already know about it now.


Your video reminded me of this selective attention test: https://www.youtube.com/watch?v=vJG698U2Mvo. This Wikipedia article also seems to be related to the phenomenon and talks about the video too: https://en.wikipedia.org/wiki/Inattentional_blindness.


Yes, this is always named in the same breath as my GP post; the gorilla video is a classic. The video I posted shows much the same thing. It makes you realise how much of what we perceive is really just what we think we perceive.


This is what happens when I try to read text in a dream.


Basically the inverse being applied? Sentence fully displayed, with the word that's focused on masked...


This could be very useful for optimizing 3D video games :) Basically, rendering only part of the screen each frame.


He says in the article that "Apple did a good job with their monitor", but what he fails to realize is that the panel in that monitor is made by Samsung, and is used in numerous other monitors, such as a bunch of Dell Ultrasharp monitors. These are the exact same monitor, except with a different frame and stand, and are available for a fraction of the cost.


Actually, unless it's changed with the 5k screen, the iMac displays are made by LG. Samsung makes the display for the iPad.


While both Apple and Dell displays use LG panels, some even using the same base model (LM270WQ1), there are actually at least 10 variants of the LM270WQ1 alone which vary considerably with respect to backlight technology, brightness, color reproduction, etc.


(Hi there, I'm Clay, the creator of the picture frame mentioned here.)

You're correct that the same panel is in the Dell display, but I'm pretty sure it's made by LG, not Samsung.

I purchased one of the Dell displays for testing purposes, and unfortunately, for reasons I don't fully understand, the color on the Dell display is quite a bit worse, with non-uniform color over the display from left to right, and also variable gamma at different brightness levels. The iMac also supports finer-grained control of brightness, which you need for luminance matching. All of this makes the iMac and its display a better choice for this application.


Yeah, but the integrated quiet computer was also important for his project. Mounting a separate computer would have added a lot of complication.


He does not say this. In fact, at the end of the article, he writes that having the whole iMac is fairly impractical and having a smaller Android system on a chip would be the better solution.


Which monitor would you recommend that uses the same display as the 5k iMac? Apparently Samsung made one half a year ago that costs as much as the iMac itself.


Dell's 5k monitor has been as low as $1440 at amazon:

http://camelcamelcamel.com/Dell-Monitor-UP2715K-27-Inch-LED-...

Not that I would recommend it over the iMac for this project.


Assuming some alternate monitor is equally good with colors and contrast/backlight, a simple Raspberry Pi should be able to do the rest of the computation needed for this.


His analysis assumes you're going to get up close to the picture to really examine the detail, from as close as one foot away. The resolution requirements are based on that assumption. I think that's unrealistic, and a much cheaper monitor would be just as effective in creating the illusion.


"Visitors to Microsoft chairman Gates' new mansion in Medina will find themselves in an electronic Disneyland where they will be able to "call up" a favorite painting or photograph on combination TV-computer screens.

"Right now, only a billionaire could afford to try $80,000 monitors and $150,000 computer-storage devices (the house could have several) that may be needed to make this all work."

from "Bill Gates: 50,000 Computer Paintings On Call" (1991)

http://community.seattletimes.nwsource.com/archive/?date=199...


I remember reading about the various technological features Gates put in when he built his home. I would be really interested to read/see (video would be awesome) a followup piece about whether many of these things either required regular upgrading or else have simply become antiquated. I would guess that his family kept upgrading the home, but details would be interesting.


And now you only need to be a millionaire.


Nice job! If you have a camera that tracks the viewer, you can also make the image it displays 3D as seen in this video from 2007: https://youtu.be/Jd3-eiid-Uw?t=2m46s (back then it used a Wiimote to track the head position).


Up close, the lack of parallax would give it away, but I think it would work OK if the viewer is sufficiently far away.


Do you think a camera would be precise enough? This looks pretty awesome.


I had a quick go at this a few months back using Apple's CIDetector[1], and even at 'high' accuracy that didn't seem precise enough — it was distractingly jiggly.

[1] [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh, CIDetectorTracking: @YES}]


I'm really surprised there's no photos of the build process. It's hard to tell if this is actually what is being described, or just a blog post around a photo of a framed image.


Build photos would have been really cool. Also, some detail on the software would be nice. I'm assuming it's running on OSX, but I still have lots of questions (what language, how can you change the picture, how does he manage it remotely, etc).


First of all, yes the frame really exists. :)

If I get around to pulling together build photos (which by the way aren't that exciting), I'll post them here. For now, just imagine a big hole in my wall (I had to reframe it), a photodiode sticking out of a hole on the right side of the picture frame, and an iMac.

The software is just a simple Objective-C / Cocoa app running on Mac OS X. I use a Kensington presenter clicker to advance the images to change them remotely. I drop new images into a Google Drive folder from another Mac, which the picture frame automatically picks up and puts into the image rotation. And if I want to make major changes, I just use a wireless keyboard and mouse to work on the frame like I would a normal iMac. It is very unsophisticated.
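The folder-pickup part of a setup like that is easy to sketch. Here's a minimal, hypothetical Python version of the idea (the real app is Objective-C/Cocoa; the polling approach and the set of file extensions are my assumptions, not details from the post):

```python
import os

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".tiff"}

def scan_images(folder):
    """Return the current set of image files in the synced folder."""
    return {
        name for name in os.listdir(folder)
        if os.path.splitext(name)[1].lower() in IMAGE_EXTS
    }

def poll_rotation(folder, rotation, known):
    """One polling step: append any newly synced images to the rotation
    and return the updated set of known files."""
    current = scan_images(folder)
    for name in sorted(current - known):
        rotation.append(name)
    return current
```

Run on a timer against the Google-Drive-synced folder, this reproduces the "drop a file in from another machine and it joins the rotation" behaviour.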


There's a video further down demonstrating it adjusting to different lighting conditions; you can see a slight lag.


My first thought when I started reading was "wow what a waste of time and money to display a stupid picture" but by the time I was near the end of the article, my attitude had turned 180 degrees. This is amazing attention to detail and craftsmanship in the traditional sense. I can only admire the process - really impressed.


It brings to mind the "moving paintings" in the ride wait queues for "Harry Potter and the Forbidden Journey" (a dark ride with a moving car on an articulated arm). I saw those things up close and they had a wonderful canvas texture laid over the screens. It really seemed magical -- though I admit I'm one to give myself over to the magic of theme parks when I can -- and it made me wonder if this effect could be reproduced by using a matte silicone mold of actual canvas or something like that.

As an aside, there is a lot of amazing technology in play in that ride; I read an unauthorized treatment of how the ride works and I was blown away at the systems/mechanical orchestration necessary to pull it off. It's no wonder it breaks down all the time.


This is cool, but to really reproduce art, you have to avoid sRGB-based displays. I've always assumed the reason wide-gamut/CIE L*a*b*-influenced displays are uncommon is that sRGB is "good enough" for most purposes outside of image editing/printing. But they can only show a fraction of the colors humans can see. IIRC, the standard sRGB space used in most monitors covers only about a third of the human-visible gamut.


Yes! What I really want for a future version of this is a 12-bit color depth, HDR, 8K OLED monitor, so you get true blacks, great resolution, and very accurate color reproduction across a very wide dynamic range. Maybe in a few years.


Yes, that seems like a huge limitation. Though the 5k iMac is wide gamut, so quite a bit better than sRGB. Are there even (consumer?) displays that have a significantly wider gamut?

http://blog.conradchavez.com/2015/10/26/a-look-at-the-p3-col...


Kudos to Apple for making the new iMacs wide-gamut. But I don't know that it's too common in the consumer space yet, mostly because once you start using multiple color spaces, you run into the problem of images created in one color space being shown in another. If the image data isn't tagged correctly, things will look over/under-saturated on different monitors. (E.g., is rgb(255, 0, 0) the reddest red in sRGB or the reddest red in P3/Adobe RGB?)

I suspect that when monitor manufacturers tire of competing on pixel density, brightness, 3D, and size, they'll eventually get around to competing on gamut. Maybe Apple will lead the way. A lot of photo/video cameras already capture a larger gamut, so monitors are the last piece of the puzzle.


Unfortunately as the gamut gets larger, so does the visual difference between adjacent colors. You need more than 8 bits per channel from end to end if you're going to use anything wider than sRGB.

Any RGB-based color space has a triangular gamut; if it's to be physically realized in a display, all 3 vertices must lie within the CIE gamut. Since the CIE gamut isn't perfectly triangular, you will always be missing some colors outside of the triangle no matter what you do. You could do much better by adding a 4th primary to create a trapezoid, which would more completely fill the space. As the entire industry is RGB-based this would be completely impractical, but it would be fun to see as a demo.
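That triangle geometry is easy to check numerically. Here's a small sketch testing whether a CIE xy chromaticity falls inside a display's gamut triangle, using the standard sRGB primaries (the test points chosen below are my own illustrations):

```python
def in_gamut(p, primaries):
    """Test whether chromaticity point p = (x, y) lies inside the triangle
    spanned by an RGB display's primaries on the CIE xy diagram."""
    def cross(o, a, b):
        # z-component of the cross product (a - o) x (b - o):
        # its sign tells which side of line o->a the point b is on.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    r, g, b = primaries
    signs = [cross(r, g, p), cross(g, b, p), cross(b, r, p)]
    # Inside iff p is on the same side of all three edges.
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

# sRGB primaries (CIE 1931 xy chromaticities from the sRGB spec).
SRGB = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
print(in_gamut((0.3127, 0.3290), SRGB))  # the D65 white point -> True
print(in_gamut((0.08, 0.83), SRGB))      # a saturated spectral green -> False
```

The same test with four primaries would need two triangles (or a general point-in-polygon check), which is the quadrilateral idea above.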


Is the CIE gamut "it"? In other words, are all colors perceivable by a typical human represented by it? I briefly scanned the Wikipedia article but wasn't any wiser.


Yes, the CIE 1931 color space is generally accepted as the limit of human perception. I don't know how much variability there is from person to person, since it's based on averages. There are a small number of tetrachromats who can distinguish a greater variety of colors, but I believe those colors are all still within the gamut.


Nicely done, though I would have loved to have seen some build pictures. There are a number of commercial players in this market (for example https://frm.fm/) which combine hardware and marketplace, and the geek in me would love to see these succeed.

I work in e-commerce selling art reproductions and have thought a lot about this. I believe as soon as you make the art ephemeral or changeable, its value (to you, rather than monetary) is diminished and it simply becomes a means of displaying pixels.

I jumped at the chance to digitise my music collection, I just can't imagine doing the same for visual art.


Interesting, I'd never seen these before. I can't seem to find any info on pricing or availability on Framed's site, though; am I missing something?


Electric Objects is another one trying to get people to convert from physical to digital art on the wall. I believe they and Framed are both priced at around $499... https://www.electricobjects.com/


I would expect there are people who lovingly care for their collection of vinyl records and find the desire to digitize a music collection equally unimaginable.


Granted, though I think the difference is even more pronounced when the primary purpose of the physical object is to be appreciated visually. Not to detract from the visual appeal of record sleeves, but the primary goal is to house the album.

I would also argue that sound can be reproduced digitally in a way that closely resembles the analog equivalent, but the same is not true of displaying art on screens vs on actual paper/canvas (despite the impressive efforts in the linked post).


Some issues:

Paint on any medium is a slightly sculptural 3D. You can look at a masterpiece from four hundred years ago and see the brushstrokes. The reflection from the relief map changes as you move your head.

Fine art inkjet prints are more subtly textured, because museum-grade paper is never perfectly flat.

A lot of art is huge - many feet by many feet. The giant displays needed to reproduce it don't exist yet.

But the biggest problem with electronic frames is working out how to power them without trailing an obvious power lead to the floor. You can solve this problem by plastering the display into the wall, or by building a false wall, or by ignoring it. None of those are ideal.

I'd love to see a practical solution, but I'm not sure one is possible with current technology - although it may get closer with the new film displays that are appearing.

I like the light sensing here though. That's a clever solution.


A cool effect would be to have the painting actually looking at you using face recognition.

Freaky.


An excellent idea. Perhaps combined with the look away detection mentioned in the article, such that the painting only tracks you when you aren't looking directly at it. Reminiscent of many a horror movie.


Unless I'm mistaken, the way for a painting to actually look at you is unfortunately just to have the pupils dead ahead. Perspective on the painting then does the rest. Otherwise this would never even have been a cliche to riff on.


So... when will you start a Kickstarter campaign?


It's too bad this isn't on GitHub. I've always wanted to make something similar, but using computational art instead of paintings. My dream is to make a thing that slowly adapts to the room it's in, but doesn't seem like a computer hanging on a wall.

Does anyone know where I can learn more about the lighting techniques described here?


I'm a big fan of f.lux, but brightness adjustment is another story.

On my iPad, I quickly turned it off. The auto-brightness never hit the spot I wanted, and switched too strongly and too late. Reading on the bus, the brightness basically goes berserk at every shadow and never settles. At home it would be more manageable, but unfortunately desktop monitors don't support brightness adjustments as far as I know; I really wish it were standard through HDMI.

Having to put an entire iMac computer on the wall to have access to brightness adjustments (at least that's my understanding of the solution) makes it a lot less practical. But kudos to the author for addressing that essential part of the problem.


Keep in mind that the cheapest 5K iMac is in the same price range as standalone 5K monitors. So you might as well use the iMac unless you can get a 5K monitor at a pretty steep discount.


A color hi-res e-ink display would be a much better fit for the project: it requires power only to change the image and doesn't need light sensors. Too bad e-ink isn't being developed anymore.


http://www.eink.com/ appears to still be doing development, and a year ago they released a colour e-ink display. Alas, the resolution and 4096-colour range are somewhat adrift of the requirements of a use like photo-realism. The market that will boost this area will be supermarket labelling on shelves and the like. Recently a few bus stops in London have even been utilising e-ink for displaying bus timetables: http://www.gizmodo.co.uk/2015/12/tfls-trial-of-clever-eink-l...

But I totally agree it would be a perfect fit for this kind of display technology; alas, the technology is still a long way off being suitable for this niche usage. And while the refresh behaviour and power usage of current e-ink make it not ideal, at least screen burn-in and ghosting are lesser considerations than in the old CRT days (though they could do with a better colour range, technology-wise).


> Too bad e-ink isn't being developed anymore.

Has development seriously stalled? I'd thought Amazon was developing it for the non-LCD Kindles. (At least I thought the Voyage represented a relatively recent step forward for e-ink.)


I was having a conversation about these things just a few days ago. I've always loved the idea of a digital frame and have had many over the years, but no device really nailed it, and backlit was always the wrong way to go. Refresh time is not an issue at all, as you could refresh on inactivity if you really don't want people to see the sausage being made. Feeding data to these things is trivial.


I continue impatiently waiting for a large-format, true-color electronic paper display at a quasi-reasonable price. I've had a different project idea for such a thing for several years now, but the wait continues. I imagine such a display would be excellent for the use described in this article: no need to mess about with matching light levels (although doing that was super cool!), the battery would only need charging every few weeks or months, etc.

There are actually a couple of products that seem to be what I want, but for reasons I don't understand, the displayed image cannot be changed. I have no idea why someone would buy electronic paper signage whose image has to be provided to the manufacturer and can never be changed later. I would expect a simple printed sign to be superior for any use it would be put to.


Wow this is incredibly cool! I love the shy feature especially, to screw with the viewer :P

One question I have, though: since the aspect ratio of the screen is fixed, does that mean all the photos have to be rescaled? That's kind of inelegant; I wonder if there's a way around it.


Yes, I have to crop the images to the aspect ratio of the display. I considered doing a simulated digital mat which could change given the aspect ratio of the work being shown, but it wasted so much of the display that it was not worth it.

I end up not having to crop much, because the aspect ratio I chose lines up pretty well with those of many good paintings and photographs.


I think the mat around the photo is also rendered, so you can have whatever aspect you want - although a portrait image might look a bit lost in the middle of it.

Next step - mount the frame on a motorised bearing so it can rotate 90 degrees depending on the image aspect!


I don't think the mat is rendered. In the video with changing light conditions, you can see how the picture lags slightly, while the mat reacts immediately.


The mat is also used to make the aspect ratio less extreme, so it can't be rendered.


Contrast is the lagging factor in good computer displays; the ones that address it are called High Dynamic Range (HDR) displays. The claim is that the eye can see a contrast factor of about a million at any one time, and a billion when switching between sunny daytime and darkest night. An end-to-end camera-display system would need 20-bit amplitudes to represent such a dynamic range. Many current systems are just 8 or 12 bits.
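The 20-bit figure follows directly from the claimed contrast ratios: a linear code needs roughly log2(ratio) bits to span a given range. A quick sanity check:

```python
import math

def bits_for_contrast(ratio):
    """Smallest number of bits whose linear code can span a contrast ratio."""
    return math.ceil(math.log2(ratio))

print(bits_for_contrast(10**6))  # simultaneous range of the eye -> 20 bits
print(bits_for_contrast(10**9))  # fully adapted day-to-night range -> 30 bits
```

A million-to-one simultaneous range lands at 20 bits and the adapted billion-to-one range at 30, which is why 8- or 12-bit pipelines fall short (real systems narrow the gap with non-linear encodings rather than pure linear codes).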

SIGGRAPH often exhibits high-end experimental displays. 4K resolution is close to what the eye can see, and four- or six-basis color systems improve color capture, but there I generally just see an improvement in nature scenes. The huge win is HDR: a good HDR display is almost indistinguishable from looking through a window.


https://en.wikipedia.org/wiki/Human_eye#Dynamic_range

>> 4K resolution is close to what the eye can see.

Depends on the distance to the display (whose resolution is independent of where the viewer is).
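To put rough numbers on this, here's a sketch that converts a panel's pixel density into pixels per degree of visual angle at a given viewing distance. The 27-inch 4K dimensions and the ~60 px/deg acuity threshold are illustrative assumptions, not figures from the thread:

```python
import math

def pixels_per_degree(px_width, screen_width_m, distance_m):
    """Pixels falling within one degree of visual angle at a viewing distance."""
    px_per_m = px_width / screen_width_m
    # Width on screen subtended by one degree, centered on the line of sight.
    m_per_degree = 2 * distance_m * math.tan(math.radians(0.5))
    return px_per_m * m_per_degree

# Assumed: a ~27" 4K panel is ~0.60 m wide. 20/20 acuity is often quoted
# as resolving ~60 px/deg, so this panel is past the limit even at 1 m.
print(pixels_per_degree(3840, 0.60, 1.0))  # ~112 px/deg
print(pixels_per_degree(3840, 0.60, 2.0))  # ~223 px/deg
```

Which is the parent's point: the same panel is "retina" at gallery viewing distance and visibly pixelated with your nose against the glass.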


Awesome! I really love the use of the photodiode, as well as the camera; both are pretty thoughtful finishing touches that elevate this to more than "a cool digital frame".

How much does the display cost, if I were to make a frame like that too?


The cheapest 5k iMac is $1799, other comparable displays are roughly in the same ballpark. I imagine the other items (frame, sensors) are trivial compared to that expense.

Replacement 5k panels for the iMac (LM270WQ1) are merely $800 on AliExpress, but you'd have to figure out how to output to them.


Hogwarts paintings!


Almost! Still need to network them.


This is amazing! I had the same idea a few years ago but never got around to it. I was planning to just use some regular LCD screens and give them some nice frames. Probably wouldn't have looked that great, though.

It's going to have to wait, though; I'm living in an apartment now and can't really cut huge holes in our walls. Even if everything could fit inside the frame, it would be pretty ugly if you couldn't hide the power cable.


If I pull an LCD panel out of an old laptop, how do I connect it to something small like a raspberry pi or beaglebone?

My understanding was that many lcd panels required custom drivers/firmware that was often embedded in the laptop/display board.


Search for the panel's part number on eBay and you'll probably find a driver board with VGA or HDMI input, or a board you can buy and ask the seller to program for your specific display. Usually less than $30, but you'll need a power supply, a case, etc.


Anti-"Shy mode" with all such frames connected over intranet would be really cool!


Fantastic read, and a great idea. Very inspiring, thank you for posting it.


I use a printer.


you tree killer :P



