
Lab Notebook: Single Pixel Camera - mmastrac
http://www.gperco.com/2014/10/single-pixel-camera.html
======
Bud
This vividly illustrates the evolutionary argument about how even a very
primitive eye is a huge survival advantage.

~~~
unoti
Yes. Given pretty much any kind of sensor, your brain can figure out how to make
use of it. One big example I can think of is humans learning to use
echolocation, similar to what bats do:
[http://en.wikipedia.org/wiki/Human_echolocation](http://en.wikipedia.org/wiki/Human_echolocation)

Another interesting one is this bracelet which vibrates depending on which
direction the wearer is facing.
[http://sensebridge.net/projects/northpaw/](http://sensebridge.net/projects/northpaw/)
Research showed that people developed a natural feel for the sense over time.

I bet there are other examples, and this may become an important area of
human-technology evolution in the future!

~~~
fudged71
Check out BrainPort and Orpyx

------
ohazi
The digital camera on the Viking Mars lander didn't use a 2d sensor either...
it's amazing what some people can do with engineering constraints!
[http://en.wikipedia.org/wiki/Viking_program#Camera.2FImaging...](http://en.wikipedia.org/wiki/Viking_program#Camera.2FImaging_System)

------
VLM
Interesting introduction to the Radon transform. Before it went that
direction, I was feeling certain it would involve a long tube with the sensor
at the end, and an accelerometer to kind of "paint" the scene by hand, or
possibly servos to scan it. That is a fairly stereotypical old deep space
probe way of doing things.

~~~
gperco
(I'm the author) That's actually how this project started: it was basically a
pinhole camera mounted on a pan/tilt mount.

~~~
tonyarkles
I've never used this in any practical way, but would any of the techniques
from Compressed Sensing help at all?
[http://en.wikipedia.org/wiki/Compressed_sensing](http://en.wikipedia.org/wiki/Compressed_sensing)

~~~
daturkel
I'm not the author, but it seems like you're right. Here's an example from the
wiki page about how a single-pixel camera used ideas from compressed sensing:
[http://en.wikipedia.org/wiki/Compressed_sensing#Photography](http://en.wikipedia.org/wiki/Compressed_sensing#Photography)

I guess applying compressed sensing techniques would mean making assumptions
about your image which allow you to fill in some of the blanks created by the
incomplete angle and position coverage (i.e., the fact that the system is
underdetermined). If you could somehow assume a certain redundancy, you could
probably make guesses about what goes in the spots that weren't well covered.
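A minimal numpy sketch of that "fill in the blanks under a sparsity assumption" idea. The sizes, seed, and the greedy solver (orthogonal matching pursuit) are purely illustrative, not anything from the article:

```python
import numpy as np

rng = np.random.default_rng(42)

# Underdetermined system: 20 measurements of a 50-dimensional
# signal that we assume is sparse (only 3 nonzero entries).
n, m, k = 50, 20, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = [3.0, -2.0, 1.5]
y = A @ x_true

# Orthogonal matching pursuit: greedily pick the column most
# correlated with the residual, then re-fit on the chosen support.
support = []
residual = y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ residual)))
    if j not in support:
        support.append(j)
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coeffs

x_hat = np.zeros(n)
x_hat[support] = coeffs
```

The sparsity assumption is what lets 20 equations pin down 50 unknowns; without it the system has infinitely many solutions.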

~~~
lp251
I doubt that compressed sensing will be very effective for this problem. Many
of the guarantees for compressive signal recovery require incoherence between
measurements, meaning each measurement is sufficiently different from the
others. In this case, each measurement is highly correlated with the others.

The Rice single pixel camera (discussed on the wiki page) effectively
multiplies the image by a random mask before it is sensed by the photodiode.
This is how they control the incoherence property.
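A toy numpy illustration of that incoherence difference. The 8x8 scene size and mask counts are made up for the example; the "arm" masks model one blocked column at a time, as in the article's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64  # pixels in a flattened 8x8 scene

# Rice-style masks: each measurement flips a random subset of
# DMD mirrors toward the photodiode.
dmd_masks = rng.integers(0, 2, size=(8, n)).astype(float)

# Arm-style "masks": the whole scene is visible except one
# blocked column, so consecutive masks share almost every pixel.
arm_masks = np.ones((8, n))
for c in range(8):
    arm_masks[c].reshape(8, 8)[:, c] = 0.0

def cos_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Arm masks are highly correlated with each other; random DMD
# masks noticeably less so. That gap is the incoherence the
# recovery guarantees rely on.
print(cos_sim(arm_masks[0], arm_masks[1]))  # 48/56 ~= 0.86
print(cos_sim(dmd_masks[0], dmd_masks[1]))
```

(0/1 masks still overlap on about half their pixels; mean-removed or +/-1 masks would be even less coherent.)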

~~~
greendestiny
You might be technically correct in the limited sense that this isn't
'compressive' sensing, but it's an absolutely classical example of sparse
reconstruction. You can resample that Radon-transformed image randomly and
reconstruct it as a sparse Fourier image. There is MATLAB code out there if
you Google around a bit.

~~~
thwest
Compressive sensing using the Radon transform has an extensive literature in
radar; I'm not sure about the OP's imager. The optimization techniques are
applicable to most sparse signal reconstruction transforms. Formally, you
would look to show the Restricted Isometry Property.
[http://users.ece.gatech.edu/~justin/ECE-8823a-Spring-2011/Re...](http://users.ece.gatech.edu/~justin/ECE-8823a-Spring-2011/Resources_files/RandomConvolution-final-Aug09.pdf)

~~~
greendestiny
It's a dense sensing of the Radon transform, hence it's not compressive. But
you can resample that in lots of ways and satisfy the RIP. Reconstruction of
images from sparse Radon transforms is one of the early examples that helped
shape the field.

------
krasin
With 50 GHz Wi-Fi arriving, the same technique could be applied to visualize a
Wi-Fi router's signals: an antenna serves as the sensor, and a metal plate as
the arm.

------
huu
Incredibly inventive and well executed. Seeing your writeup go from this wacky
hand-waving theory to a built camera to actual results is awfully rewarding.

~~~
mkesper
A good first step to any potentially crazy idea is to simulate it on a
computer. If it doesn't work there, either your simulation is crap or the idea
is bad. If it does work, either your simulation is crap or the idea might work
in the real world.

Cool writing style, too!

------
iamwil
I thought it was going to talk about implementing Compressed Sensing
(Compressed Imaging). If you haven't heard about it, you should look it up.
It's pretty neat.

~~~
skierscott
I have spent the last year building exactly this: a single-pixel camera that
relies on compressed sensing.

The motivation behind this project is that a professor I am working with had
snow melting on his roof. He looked into thermal cameras but found them
prohibitively expensive at $4k-$40k. So we decided to build a cheap camera
that used only a single thermal sensor and took pictures as quickly as
possible.

This camera relies on the fact that many real-world images are largely uniform
in color (e.g., the sky is mostly blue). It looks for the areas that contain
the most detail, the edges, through something called the wavelet transform.
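A hand-rolled one-level 2-D Haar transform on a made-up toy image shows the effect (a real system would use a proper wavelet library; this is only a sketch):

```python
import numpy as np

# Toy "thermal image": flat regions with one sharp vertical edge.
img = np.zeros((8, 8))
img[:, 3:] = 10.0

# One level of a 2-D Haar wavelet transform: pairwise averages
# and differences along rows, then along columns.
def haar_1d(a):
    avg = (a[..., 0::2] + a[..., 1::2]) / 2.0
    diff = (a[..., 0::2] - a[..., 1::2]) / 2.0
    return np.concatenate([avg, diff], axis=-1)

coeffs = haar_1d(haar_1d(img).swapaxes(0, 1)).swapaxes(0, 1)

# Flat regions transform to zeros; the nonzero detail
# coefficients sit exactly at the edge, so the image is sparse
# in the wavelet domain.
nonzero = np.count_nonzero(np.abs(coeffs) > 1e-12)
print(nonzero, "of", coeffs.size)  # 16 of 64
```

Most of the 64 pixel values collapse to a handful of coefficients, which is what lets a compressed-sensing camera get away with few measurements.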

~~~
rasz_pl
The trouble with single-pixel thermal imaging is inertia: thermopiles have
thermal mass, so you can't take 1000 readings per second; you can't even take
100 readings per second. So while a one-pixel camera in the visible spectrum
can work thanks to high-speed acquisition, a thermal one will be beyond
useless.

Not to mention the wave of extremely cheap thermal imagers we got this year:
the FLIR One and Seek Thermal, all in the ~$200 price range.

~~~
skierscott
I'm inclined to believe the FLIR One does not use "true" infrared. There's a
portion of the visible spectrum that is nearly IR, and it's possible to rely
on some IR reflectivity phenomena. I looked at some other FLIR thermal cameras
and found them to be much more expensive: thousands of dollars.

However, there's no doubt a small mobile platform is very usable. The primary
goal behind this work was not to make a product: it was to provide a proof of
concept. We were interested in the image processing and showing that some
heavy lifting could be done on a small mobile platform (the Raspberry Pi).

You are correct about the acquisition rate. However, the acquisition rate was
more limited by the motors to move the sensor into place.

~~~
rasz_pl
The FLIR One is a proper uncooled microbolometer; FLIR finally decided to
innovate, use modern manufacturing processes, and build something cheap to
make.

Btw, their $4000 E8 thermal camera has the VERY SAME components inside (from
the 320x240 bolometer down to the firmware) as the $950 E4; the only
difference is a digitally signed config file (which used to be hackable).

Your platform might have been limited by motor speed, but once you reach 10-20
reads per second you will hit a brick wall of thermopile inertia, and that
will be the end.

------
angersock
Somewhat similar to the early work on televisions:

[http://en.wikipedia.org/wiki/Mechanical_television](http://en.wikipedia.org/wiki/Mechanical_television)

One of our undergrad elec courses had a lab where you built one of these. It's
kind of a trip.

------
zdw
From the images of the kit, it doesn't look like there's a lens hood or
anything preventing stray light at odd angles from hitting the sensor and
washing out the image with the brightest object in the scene.

I wonder if adding a small black tube or similar optics to the sensor would
reduce light bleed dramatically.

~~~
gperco
The sensitivity of the sensor vs angle was pretty close to cos(angle), so it
was not very sensitive to light coming in from extreme angles. But the whole
process relies on light from everywhere in the scene hitting the sensor all at
once, except for the light being blocked by the arm.
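That measurement model, roughly: each reading is the whole scene's light minus whatever the arm blocks, so the total minus a reading gives a line integral. A toy numpy sketch, with the geometry simplified to column shadows (the real arm sweeps many angles):

```python
import numpy as np

# Toy scene: the sensor normally sees the sum of every pixel.
scene = np.arange(16.0).reshape(4, 4)
total = scene.sum()

# With the arm over column c, everything except that column
# still reaches the sensor.
readings = np.array([total - scene[:, c].sum() for c in range(4)])

# Subtracting each reading from the open total recovers the line
# integrals the arm swept out, which is the kind of data a
# Radon-style reconstruction starts from.
line_integrals = total - readings
print(line_integrals)  # the column sums of the scene
```

Sweeping the arm at many angles gives line integrals in many directions, and that collection is exactly a (sampled) Radon transform.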

~~~
hirsin
Recognized the results of FBP from working on PET scanners and got far too
excited. Very cool stuff!

I saw in the article that you used the mean to correct for brightness and
clouds. Was scaling to correct for overlap not needed because you used the
full sweep across the frame?

------
ChuckMcM
That looks like a lot of fun, I'm going to have to try it. I was thinking he
would do it the other way (move a hole around) but I can see this has its
benefits as well.

------
thisjepisje
This reminds me of a type of eye that occurs in plankton; it consists of a row
of photosensitive cells that moves left and right rapidly.

------
TheLoneWolfling
Here's an entirely different approach:

Take an old LCD screen (small). Take the back out of it. Put a (grayscale)
single-pixel sensor behind it.

Then repeatedly display random hash patterns on the screen and take a
measurement of the light intensity. My intuitive guess would be that you'd get
the most info with half the pixels fully on and half fully off, randomly
chosen, but I can't say why.

You should be able to get much the same result as this, without any moving
parts, although you'd have to use an entirely different transformation to get
there: it reduces to a (massively) underspecified system of linear equations
to solve, with noise added in to boot. Have fun with that.
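A toy numpy sketch of that linear-system view (the sizes, seed, and half-on patterns are made up, and noise is omitted to keep it short):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 6x6 scene behind the LCD, flattened to 36 unknowns.
scene = rng.random((6, 6))
x = scene.ravel()
n = x.size

# Each frame: a random half-on/half-off pattern on the LCD; the
# single sensor reads the total transmitted light.
def half_on_pattern(n, rng):
    p = np.zeros(n)
    p[rng.permutation(n)[: n // 2]] = 1.0
    return p

patterns = np.stack([half_on_pattern(n, rng) for _ in range(n)])
y = patterns @ x

# With as many patterns as pixels the system is (almost surely)
# fully determined and least squares recovers the scene; with
# fewer frames it becomes the underspecified system described
# above, and you'd need a sparsity prior to fill the gap.
x_hat, *_ = np.linalg.lstsq(patterns, y, rcond=None)
```

With noise added, you would regularize the solve or average repeated frames rather than invert directly.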

------
ObviousScience
I'm curious about a couple of details of the delivered picture versus the
reference: how long did the single-pixel camera take to expose, and how much
vibration was introduced by its arm's motion?

I suspect some of the sharpness of the one versus the other is related to
different exposure times and vibration while imaging, so it's probably doing
even better at its job than it seems.

------
columbo
It'd be interesting to see what 12 or 24 frames of an animation look like,
something like this:
[http://upload.wikimedia.org/wikipedia/commons/4/4a/Muybridge...](http://upload.wikimedia.org/wikipedia/commons/4/4a/Muybridge_race_horse_gallop.jpg)

------
standard4317
[http://dsp.rice.edu/research/compressive-sensing/single-pixe...](http://dsp.rice.edu/research/compressive-sensing/single-pixel-camera)
From 2007. Details on a different and more refined approach that I thought
might be worth comparing.

------
drone
Excellent job; I like the thought process and the outcome. I think it's worth
noting that the results you're getting are great on their own merits. Like a
hand-made pinhole or slit camera, the break from "reality" could be seen as an
artifact of the method.

------
analog31
Very nice!

Cameras based on a single pixel, or a single-row array of pixels, have uses at
wavelengths where two-dimensional pixel arrays are impractical.

~~~
Hytosys
Have any specific applications off the top of your head? Sounds interesting!

~~~
skierscott
Take snow melting on your roof, meaning a heat leak and wasted cash. You want
to buy a thermal camera, but they're expensive at $4k to $40k. A single pixel
camera would be fine for you -- you wouldn't mind waiting a couple minutes for
it to move the sensor around.

------
jndsn402
Very cool.

