
Hyperspectral analysis with just an app - sschueller
https://www.fraunhofer.de/en/press/research-news/2017/february/app-reveals-constituents.html
======
kragen
This is very interesting, but the headline is a bald-faced lie.

The headline "hyperspectral analysis with just an app" is unfortunately kind
of bullshit. (And Fraunhofer seems to have realized this, as the current
headline is just "App reveals constituents".) As highd points out, this will
give you nine spectral dimensions instead of the usual three, or eight rather
than two after you drop luminance, which will almost certainly be the first
principal component. From eight spectral dimensions you can maybe get back to
eight wavelength bands.

(And you _do_ get nine spectral dimensions rather than the usual three,
because the spectra of the light emitted from the display are not going to
exactly match the response spectra of the pixel colors in the camera. But
they'll be close, so some of those dimensions will have very little energy;
they'll be very noisy, and JPEG compression is likely to be fatal. And of
course they vary from one device to another.)
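
For concreteness, here's a minimal numpy sketch of where those nine numbers
come from, using made-up Gaussian curves for the display emission and camera
response (the real curves are device-specific): each reading is the
illuminant x reflectance x sensor-response triple product, integrated over
wavelength.

    import numpy as np

    wl = np.arange(400, 701, 10)  # visible wavelengths, nm

    def gaussian(center, width):
        return np.exp(-0.5 * ((wl - center) / width) ** 2)

    # Hypothetical display emission and camera response curves (R, G, B).
    display = np.stack([gaussian(c, 20) for c in (610, 540, 460)])
    camera = np.stack([gaussian(c, 35) for c in (600, 530, 465)])

    reflectance = 0.5 + 0.4 * np.sin(wl / 40.0)  # some unknown surface

    # Nine readings per pixel: m[i, j] = sum over wl of L_i * R * S_j.
    m = np.einsum('iw,w,jw->ij', display, reflectance, camera)
    print(m)

The diagonal entries dominate; the cross terms (display channel i seen
through camera filter j != i) are small precisely when the curves nearly
coincide, and those are the dimensions that end up very noisy.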

An eight-band image is not a fucking hyperspectral image. It's multispectral,
like Landsat. But that's still enough to do quite a bit of material
discrimination. Probably not even close to enough to detect pesticide residue,
though.

The only way you could get more information this way would be through
nonlinear effects, where illuminating the object with twice the light gives
you a scattered result significantly different from twice the original
scattered result. Those don't contribute significantly to the spectra of
ordinary materials except under extremely intense lighting conditions,
although I've seen two-photon effects do interesting things with
glow-in-the-dark paint and a red laser pointer, and of course there are
devices (like green laser pointers!) that exploit second-harmonic
generation by using exotic nonlinear crystals.

~~~
AndrewKemendo
Not to give them too much credit, since the article doesn't go into this
detail, but it would in theory be possible to use the speaker/mic for
ultrasound [1] and to pick up IR, since on certain devices there is no IR
filter on the front-facing camera [2]. So you could combine RGB input with
background IR pickup and ultrasound to get actual hyperspectral data.

[1] [https://www.infosecurity-magazine.com/blogs/ultrasonic-cross...](https://www.infosecurity-magazine.com/blogs/ultrasonic-crossdevice-tracking/)

[2] [http://kenstechtips.com/index.php/how-to-see-the-invisible-i...](http://kenstechtips.com/index.php/how-to-see-the-invisible-infrared-world-using-your-mobile-phone-camera)

~~~
andai
What is ultrasound used for here?

------
nom
The idea itself is great, but the quality of the result obviously depends
heavily on the hardware - namely the spectra of the display and camera.

An LCD has a totally different spectrum (broad) from an LED display (single
wavelength). Cameras, on the other hand, always have a broad spectrum, but
it varies depending on the color filters, the response curve of the pixels,
and the post-processing (e.g. the demosaicing algorithm). Regarding the
response curve of the display: that's also something they have to calibrate
for, and it's not only a hardware parameter, because it can be modified in
software, for example by the iOS Night Shift feature.

Making this work with a single piece of hardware is hard enough, and I
wonder if they'll target the Android market at all. My guess is they're
going to release it only for iPhones because the hardware is so homogeneous.

Also, the press release photo made me giggle, the front camera isn't even
facing the object they are scanning :D

Edit: Saying LEDs have a single wavelength is probably not correct, but they
have a very narrow band depending on the technology used.

~~~
smallnamespace
Couldn't you in theory calibrate this for any feasible device using a known
light source and a large enough set of color swatches with known response
curves? Inferring device parameters should just be solving some system of
equations given enough constraints, and if you overconstrain the system you
also get an estimate of how much noise is in your parameters.

In this case, you need to simultaneously estimate the LED wavelengths as well
as the response curve of the camera.

Not saying it'd necessarily be worthwhile to do, but if that actually worked,
all you'd need to do is send a light source + a color swatch booklet through
the mail to a potential user if the device isn't in the database yet.
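
A minimal sketch of that inference, assuming the response curves are
sampled on a coarse wavelength grid (all curves here are invented for
illustration): each swatch contributes one linear constraint per channel,
and an overdetermined least-squares solve recovers the response along with
a residual that measures the noise.

    import numpy as np

    wl = np.arange(400, 701, 20)   # coarse wavelength grid
    k = wl.size                    # unknowns per camera channel

    rng = np.random.default_rng(0)
    swatches = rng.uniform(0.1, 0.9, (40, k))            # known reflectances
    illum = np.exp(-0.5 * ((wl - 550) / 80.0) ** 2)      # known light source
    true_resp = np.exp(-0.5 * ((wl - 530) / 30.0) ** 2)  # unknown response

    # One constraint per swatch: reading = sum(swatch * illum * response).
    A = swatches * illum
    readings = A @ true_resp + rng.normal(0, 1e-3, 40)

    est, residual, *_ = np.linalg.lstsq(A, readings, rcond=None)
    print(np.max(np.abs(est - true_resp)))  # recovery error

Estimating the LED wavelengths jointly makes the problem bilinear rather
than linear, but the same overdetermination argument applies.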

~~~
dave_sullivan
> using a known light source and a large enough set of color swatches with
> known response curves? Inferring device parameters should just be solving
> some system of equations given enough constraints, and if you overconstrain
> the system you also get an estimate of how much noise is in your parameters.

If that's true, and those constraints aren't terribly well understood, it
would make for a good machine learning problem. Fwiw.

~~~
smallnamespace
Sorry I wasn't being clear -- the RGB camera response to each patch of a
swatch should give (up to) 3 different constraints under normal lighting,
plus another 3 by cycling through LED colors, so in theory you only need
ceil((k+3)/6) colors to estimate a k-variable camera response curve (the
other 3 estimate your LED wavelengths).
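
(By that counting, a camera response sampled at k = 21 wavelengths would
need ceil((21 + 3) / 6) = 4 swatch colors.)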

------
notnot
So they're using the screen, which consists of three monochromatic LED
types, as a full-spectrum light source? I don't think that works... You
still just get three points on the spectrum, same as the Bayer filter on
the camera.

I suppose if the three screen LED wavelengths were significantly different
from the three camera filter wavelengths then you could:

Illuminate with screen Red to get Rscreen.

Illuminate with screen Green to get Gscreen.

Illuminate with screen Blue to get Bscreen.

Use ambient full-spectrum light to get Rfilter, Gfilter, and Bfilter.

Then you'd have 6 points on the spectrum.
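
A rough sketch of that capture sequence; show_color, screen_off, and
capture_rgb are hypothetical stand-ins for whatever display and camera APIs
the app actually uses:

    def measure_six_bands(show_color, screen_off, capture_rgb):
        screen_off()
        ambient = capture_rgb()  # Rfilter, Gfilter, Bfilter via ambient light
        out = {'Rfilter': ambient[0], 'Gfilter': ambient[1],
               'Bfilter': ambient[2]}
        for name, rgb, ch in (('Rscreen', (255, 0, 0), 0),
                              ('Gscreen', (0, 255, 0), 1),
                              ('Bscreen', (0, 0, 255), 2)):
            show_color(rgb)
            reading = capture_rgb()
            out[name] = reading[ch] - ambient[ch]  # subtract ambient part
        return out  # six points on the spectrum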

~~~
highd
I think you're correct, barring significant nonlinearity in the Bayer mask
or object. Technically you can get 9 linearly independent points - one for
each combination of light channel on and Bayer mask channel read. Ideally
only 3 of those will be nonzero, but if the Bayer mask is imperfect you'll
see some illumination on adjacent channels. Environmental background is
subtracted out from all of them since it's unknown, so that doesn't give
another point.

There's also no way you're measuring pesticide residue with that - I doubt
that would even be possible with a high-end visible hyperspectral camera.
Maybe with a Raman spectrometer.

I've designed a couple versions of cell phone camera-based spectrometers and
spectral imagers, so I'm relatively familiar with the design principles.

~~~
nom
Can you give us some insight on the whole pesticide thing? How is it possible
to detect it with a spectrometer in general?

Are you sure that it is impossible to detect it, even if the app is
'calibrated' to a certain object, e.g. an apple? I think if you limit the
search space it could be possible!?

~~~
kortex
It's impossible. Most chemicals of interest are pretty boring in the
visible spectrum. I'd say >95% of pure substances I've worked with -
everything from pesticides to pharmaceuticals - are some variant on "white
to off-white solid" or "clear to amber liquid." White/clear indicates that
all photons visible to us interact with the material equally. You get tans,
yellows, and browns largely from high-frequency light (the deep blue/purple
part of the spectrum) being absorbed by assorted chemical bonds.

Spectroscopy is predominantly done with UV (200-280 nm most common) and IR,
which are regions where photonic interaction is dominated by electronic and
vibrational/rotational transitions, respectively.

Visible light absorption is typically caused by highly conjugated bonds and
metal-coordination complexes. In day-to-day terms, this is almost
exclusively dyes (synthetic and natural). Dyes also tend to be really
potent absorbers - you only need minuscule amounts of them to create very
vivid colors. So a purely visible-light-based app would at best be able to
give you a handle on what sort of dyes are in something. It won't tell you
whether it has pesticides (let alone traces!) or HFCS or nutrients or
what-have-you.

tl;dr - no, it's not remotely possible to even detect pesticides with visible
light.

------
leecarraher
I actually built a little chameleon throwie concept years ago based on this
principle. It used two RGB LEDs as the output light and stepped through the
colors, then used the unused LED to measure the light (LEDs have
photosensitive resistance). Once it figured out its color (dumb mixing of
resistances) it would output the guessed color using PWM on both RGB LEDs
of the thing it was on. In short, though, it didn't work very well. Also,
like others have said, LEDs have very narrow bandwidths (by design, as it
uses less power); if this weren't the case people probably wouldn't be
paying around $2,000 for a light bulb
([http://www.shop.spectrecology.com/USB-ISS-UV-VIS-illuminate-...](http://www.shop.spectrecology.com/USB-ISS-UV-VIS-illuminate-cuvette-holder-USB-ISS-UV-VIS-2.htm?_vsrefdom=adwords&gclid=CjwKEAiAxKrFBRDm25f60OegtwwSJABgEC-ZQK35fiSEKRNFRRhfDNrp_8BnWeioZd-NNAzJ6Q0DHxoCs8Lw_wcB))

~~~
msds
I tried to do this exact thing years ago too! Except I only used one RGB
LED, and thus could never sense the red channel, because the bandgap of
green and blue LED dies is too big to really get any signal from red light.
Isn't it annoying when your project is up against quantum mechanics?

I also built a swarm of throwies that used a single LED to synchronize
their blinking. That project was much cooler, and I wish I had better
documentation...

~~~
jonathankoren
I am interested in this project.

~~~
msds
Unfortunately, the lack-of-documentation thing strikes again... It was
pretty simple electronically - an ATTiny25, a coin cell, and an LED wired
across two IO pins, for light sensing using the LED's junction capacitance
as a photocurrent integrator
([http://www.merl.com/publications/docs/TR2003-35.pdf](http://www.merl.com/publications/docs/TR2003-35.pdf)).
The trick was in the control algorithm - I got things synchronizing
reasonably well, but the LED ensemble was prone to all sorts of weird
chaotic behavior.

I suppose I should recreate it, on an even larger scale. I just need to figure
out some better way to power it...

------
teilo
Were it not for the source, I would have called this a hoax.

------
chillingeffect
but but but... how many dimensions do they expect to extrapolate from the
three available by illuminating from the screen?

I'm anticipating someone snapping an image and an app saying, "That's either
a fresh organic grape, a tractor tire or a leg of lamb."

Will you have to tell it: "This is a kale leaf" and let it evaluate the
signal levels relative to other kale leaves?

~~~
Etheryte
As far as I can tell, their research has nothing to do with directly
_identifying_ objects, only identifying _properties_ of objects.

------
krapht
Hmm, how accurate is it, though? How do you calibrate such a system when the
ambient illumination could be anything?

~~~
teilo
If I understand the article, that problem is alleviated by illuminating the
object with only a single color of light at a time in rapid succession, using
the screen of the phone. Presumably, the camera would filter out ambient light
from the result by sampling that first.

It is certainly not going to be precise, but it's quite an achievement if
they can make it work within the limits of the device.
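
A minimal numpy sketch of that ambient rejection, assuming you can grab one
frame with the screen dark and one per illumination color:

    import numpy as np

    def ambient_corrected(frames, dark):
        """frames: {color_name: HxWx3 array captured under that screen
        color}; dark: HxWx3 frame with the screen off. Returns the
        ambient-subtracted frames, clipped at zero."""
        dark = dark.astype(np.int32)
        return {name: np.clip(f.astype(np.int32) - dark, 0, None)
                for name, f in frames.items()}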

------
jaydub
Just to confirm, is this a smart-phone based spectrometer? (Why don't they use
that term?)

~~~
RandomOpinion
> _Just to confirm, is this a smart-phone based spectrometer? (Why don't
> they use that term?)_

Spectrometers generally produce a single spectrum from a light source.

A hyperspectral imager produces a spectrum for each pixel in the image. See
[https://en.wikipedia.org/wiki/Hyperspectral_imaging](https://en.wikipedia.org/wiki/Hyperspectral_imaging)
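
In array terms (a toy illustration): a spectrometer output is one spectrum,
while a hyperspectral cube holds a spectrum per pixel.

    import numpy as np

    spectrum = np.zeros(100)            # one spectrum: shape (bands,)
    cube = np.zeros((480, 640, 100))    # hyperspectral cube: (H, W, bands)
    pixel_spectrum = cube[240, 320]     # the (bands,) spectrum at one pixel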

~~~
anfractuosity
What I was really confused about, though, since this doesn't use a grating
etc., is how it obtains a spectrum.

Like with a grating I can see how you can obtain a spectral pattern for many
wavelengths, but with a Bayer array, surely you can only distinguish R, G, B.

Would I be right, though, in thinking that if there's no stray light and
only the light from the phone screen is shining on the apple, you can cycle
through many wavelengths of light, enabling you to measure the reflected
light and determine the spectrum?

One thing I was wondering: can pesticides presumably also be detected using
visible-light spectroscopy, then?

~~~
nom
_> Like with a grating I can see how you can obtain a spectral pattern for
many wavelengths, but with a Bayer array, surely you can only distinguish R,
G, B._

They inverted the principle: rather than controlling the bandpass of the
sensor (which a grating does passively, over the spatial domain), they are
controlling the spectrum of the light source. You can produce a varying
light spectrum by setting R/G/B on the display, and record the light
reflected by the object with your R/G/B camera pixels, which also have a
certain frequency response.

I have a feeling you can use this to recreate a coarse approximation of the
full spectrum and then use it to infer some of the object parameters, like
pesticides on fruit.
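
As a toy version of that inference, assuming the combined display x camera
band sensitivities have been calibrated (the curves below are invented), a
ridge-regularized least-squares solve turns the nine readings back into a
coarse reflectance estimate:

    import numpy as np

    wl = np.arange(400, 701, 10)

    def gaussian(c, w):
        return np.exp(-0.5 * ((wl - c) / w) ** 2)

    # B has one row per (display channel, camera channel) pair: 9 x 31.
    display = [gaussian(c, 20) for c in (610, 540, 460)]
    camera = [gaussian(c, 35) for c in (600, 530, 465)]
    B = np.array([d * s for d in display for s in camera])

    true_r = 0.5 + 0.4 * np.sin(wl / 40.0)  # "unknown" reflectance
    m = B @ true_r                          # the nine readings

    # Ridge regression: argmin ||B r - m||^2 + lam * ||r||^2.
    lam = 1e-3
    r_hat = np.linalg.solve(B.T @ B + lam * np.eye(wl.size), B.T @ m)

The estimate is only trustworthy inside the bands where B has support,
which is why a coarse approximation is the best you can hope for.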

------
angry_octet
High-tech spectral analysis, with an apparent goal of enabling death-dealing
anti-GMO pseudoscience. Great work, Fraunhofer; I'm expecting an impressive
Scientology e-meter app next.

