The coloring is arbitrary, chosen in this example to mimic the visible-wavelength response the tissue would have if the patient were dissected. It doesn't generalize to more complicated anatomy, and even this case very likely needed post-processing.
I wish we could peek inside the patient as if we had opened them up, but this isn't that yet.
False colour images aren't meant to mislead; they're meant to make what you're looking at easier to understand. "True colour" has no real meaning anyway.
> visible spectrum has been shifted into x-ray spectrum.
No, x-rays are far more energetic than visible light (MUCH shorter wavelengths). Red shift, as the name implies, refers to shifting of wavelengths towards the red/infrared side of the spectrum, not towards the blue/ultraviolet/x-ray/gamma-ray side. The most distant, oldest thing we can see is in microwave wavelengths, and that's the remnant of the Big Bang.
The real issue is with dust, and the inverse square law (that the intensity of radiation follows).
Some objects are obscured by dust, and so the only way to see them is by looking at wavelengths that are able to penetrate the dust.
The further away an object is, the less intense the radiation reaching us, so we have to stare at it for a very long time to collect enough photons to form good images. This is why views of Andromeda through a telescope with your own eyes do not look like this visible-light image of Andromeda: https://apod.nasa.gov/apod/ap991114.html
Instead it looks more like this under the absolute best viewing conditions on Earth with a quality telescope (typical views are a blurred version of this): http://www.deepskywatch.com/images/articles/see-in-telescope...
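The inverse-square falloff above is easy to quantify; here's a minimal sketch (the function name is my own):

```python
def relative_flux(distance_ratio):
    """Received flux relative to a reference distance:
    moving n times farther away divides the flux by n**2."""
    return 1.0 / distance_ratio ** 2

# Doubling the distance quarters the flux, so collecting the same
# number of photons takes 4x the exposure time.
print(relative_flux(2.0))   # → 0.25
print(relative_flux(10.0))  # → 0.01
```

That quadratic penalty is why faint, distant objects need hours of integration time rather than a quick look through an eyepiece.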
Surgery rarely looks like that. We are messy creatures. Our parts all overlap and fold among each other. It never looks as open as in the scan, short of dissecting the patient well beyond what is ever needed for any actual procedure.
The example image certainly looks cool, but it is too 2D. It doesn't have the 3D structure visible in traditional x-rays. An expert can look at an old photographic x-ray and see the layers of tissue and bone atop each other; it is a rather good 2D visualization of a complex 3D shape. Giving tissues more solid color takes away the translucence and, imho as a non-doctor, removes much of the information.
This tech also does work quite nicely on complex anatomy.
By my understanding, it's not possible to determine an object's interaction characteristics at wavelength X by probing it with wavelength Y. (Unless you were able to create some sort of beat frequency tuned to mimic X by sending in Y+delta? Just thinking off the top of my head.) It seems fraudulent for the company to say that it's capturing color.
>these type of sensors are able to put different bands to different buckets.
Would the best summarization be: photon-counting sensors allow for the capture of x-ray images with higher dimensionality?
CT machines have this calibrated as standard, and DICOM images have the corresponding fields for the values. Any DICOM viewer will do this (e.g. 3D Slicer, ParaView, OsiriX, etc.)
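For illustration, that calibration is just a linear rescale of the stored pixel values, using the standard DICOM RescaleSlope (0028,1053) and RescaleIntercept (0028,1052) attributes; a minimal sketch (the function name is my own):

```python
def to_hounsfield(raw_pixels, rescale_slope, rescale_intercept):
    """Linear rescale of stored pixel values into Hounsfield units,
    per the DICOM RescaleSlope / RescaleIntercept attributes."""
    return [p * rescale_slope + rescale_intercept for p in raw_pixels]

# A common CT calibration: slope 1, intercept -1024.
# Water should land near 0 HU and air near -1000 HU.
print(to_hounsfield([1024, 24], 1.0, -1024.0))  # → [0.0, -1000.0]
```

Viewers like the ones listed above apply this rescale automatically before windowing and display.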
As you say, there is nothing new in this, just PR. Medipix has also been around for a while; it's basically a solid-state detector with an integrated USB readout on the silicon. Neither was invented at CERN; it's COTS.
Yes, this is a PR release for tech using the Medipix3. If I understand correctly (which I very probably don't since this HN thread is the first time I've heard of this technology), this is analogous to faster/more accurate AI software being developed with some chip company's latest massively parallel processor - representative of real, but nowhere near groundbreaking, technological advancement.
What's stopping us? Are you saying that there's no way to tell different kinds of tissues apart (e.g. not even with an MRI)? I imagine if we did, we could also add "color" (via post-processing) and enable a "dissection". Or are you saying it has something to do specifically with color?
We see in a range of wavelengths 0.4--0.7 um. X-rays have no intersection with our visible range. Therefore the best you can do is define some function that maps x-rays into our color range, but things look very different in x-ray land than they do in color land. For starters, most things would be transparent if you could see x-rays (hence x-ray imaging!).
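As a sketch of such a mapping, one arbitrary convention is to bin detected photons by energy and drive one RGB channel per bin (the function name and the bin-to-channel assignment here are my own invention, not anything from the article):

```python
def bins_to_rgb(low, mid, high, full_scale=1.0):
    """Map three X-ray energy-bin intensities (0..full_scale) to an
    8-bit (R, G, B) triple. Which bin drives which channel is an
    arbitrary choice, like any false-colour scheme."""
    def to_byte(v):
        return max(0, min(255, round(255 * v / full_scale)))
    # Here: high-energy -> red, mid -> green, low-energy -> blue.
    return (to_byte(high), to_byte(mid), to_byte(low))

print(bins_to_rgb(1.0, 0.5, 0.0))  # → (0, 128, 255)
```

Any permutation of the channels (or an entirely different palette) would be equally "correct", which is the point: the colours carry spectral information, not actual visible-light appearance.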
After much searching I found the energy resolution here, stated as <2 keV. For comparison, a modern silicon drift diode fluorescence detector will have an energy resolution of ~130 eV.
I've worked on an X-ray machine during my second internship as an embedded engineer (fantastic internship!).
My task was to optimize the way images and colors are presented to the viewer, given the measured X-ray data.
What I learned there is that whatever is presented to you on the screen is _completely arbitrary_.
When you can measure a wider range of X-ray frequencies, you have more room to play with the colors. What colors you actually assign to the data is _absolutely arbitrary_, and technology similar to this has existed for a long time.
Patients can even download the lite version and review their own images.
Until now we've only measured the overall brightness of the x-rays, and OsiriX translates (even marginal) differences in brightness into different colors. This makes features of nearly the same brightness easier for a human to spot.
This new technique actually measures different wavelengths of the x-rays at the detector, and since we call different wavelengths of visible light "color", it's a good analogy to use the same word for x-rays: it can differentiate different colors of x-rays.
This is really promising, nuances in chemical composition may lead to differences in opacity for different wavelengths.
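The brightness-to-colour step described above is essentially a lookup table; a toy sketch (the function name and LUT values are invented for illustration):

```python
def pseudo_colour(brightness, lut):
    """Map a single brightness value in [0.0, 1.0] through a colour
    lookup table, so nearby brightnesses get visibly distinct colours."""
    index = min(len(lut) - 1, int(brightness * len(lut)))
    return lut[index]

# A made-up four-entry LUT: black, purple, orange, white.
lut = [(0, 0, 0), (128, 0, 128), (255, 128, 0), (255, 255, 255)]
print(pseudo_colour(0.6, lut))  # → (255, 128, 0)
```

The contrast with the new detectors: here colour is synthesised from one measured number per pixel, whereas a spectral detector measures several numbers per pixel and the colour can encode genuinely independent information.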
This causes radiology companies so much pain. How do you give the images to them when people don’t have CD drives anymore? How do you get them to understand that they aren’t jpgs? How do you get them to a stage where they can open the images (they usually don’t have a Mac)? And the final pain is the last call. “What’s the black thing in the back of the white bit by the edge? Is it cancer?”
Now I have an easy-to-understand example to make my skeptical friends understand how CERN's research benefits us in myriad ways.
https://archive.org/details/paulotlet or https://www.youtube.com/watch?v=KLX2OGw31Oo
There are a bunch of ways to convert that to an RGB display, and I wouldn't be surprised if there were multiple rendering options you can flip between until you find the one that gives you the visual discrimination you're looking for.
It's still OK; it's not supposed to look like a regular procedure.
It's probably inevitable that a 'spectral' scanner requires greater exposure than clinical CT scanners, since the intensity of a spectral X-ray source apparently varies during the scan; that implies slower scans and more X-ray exposure than conventional CT.
Apparently that's why MARS' current product is intended for preclinical (non-human) use only.
- For a given procedure, will the radiation dose be higher than its corresponding 'black and white' X-Ray?
- If the radiation dose is higher, does the color add additional information or does it increase the diagnostic capabilities? Traditional X-rays (CT) already distinguish between soft-tissue, bone, fat, cartilage... so will the color distinguish structures within soft-tissue?
You can get that with a DICOM station and 5 minutes, or for free using OsiriX at home and trial and error if you're not used to CTs.
Ask for the CD next time you get a scan, or download one of the many examples, and play with OsiriX.
I’m looking up the things you mentioned and they’re interesting, though. I’d love to see imaging and home-computed assessments become a bigger part of the “quantified self” movement.
You can do the same thing for X-rays, or any 3-channel data source.
(Fun fact: because of this, there are colors that your brain can perceive but that aren't wavelengths of light. Magenta is seen when your blue and red sensors are activated... but there is no single wavelength of light that can do this. Meanwhile, your eye can't tell the difference between monochromatic yellow light and red and green light received simultaneously. This is why RGB monitors work, and why you can't represent every color you can see on a computer screen.)