First ever color X-ray on a human (yahoo.com)
212 points by Vaslo 73 days ago | 82 comments



Photon counting sensors have been available for quite a while, so this is unfortunately not new. And it's not 'color', it's multi-channel imaging. X-ray tubes produce a wide spectrum of X-rays, and these types of sensors sort the different energy bands into different buckets.

The coloring is arbitrary, conveniently chosen in the example to look like a visible-wavelength response, as if the patient had been dissected. It does not work for more complicated anatomy, and even this case very likely needed post-processing.

I wish we could have a peek inside the patient as if we had opened it up but this is not yet that.


It seems like this is more akin to 'false color'.[1]

[1] https://en.wikipedia.org/wiki/False_color


Which is similar to the images we see of space, where detector responses at different wavelengths have been remapped into the visible spectrum. So we produce beautiful images, but they aren't accurate to what space would look like to the naked eye.


Probably more useful than the naked eye in many cases. The colors our eyes can see mostly evolved to detect ripe fruit, not diseased tissue.


Why did fruit evolve to change colors when ripe?


In many cases so animals would eat them, digest the seeds, and poop them out somewhere else.


Why did animals evolve an attraction to ripe-colored fruit?


On the contrary, finding edible food is certainly important, but avoiding poisonous/diseased food is even more important.


If we can detect ripe, surely we can infer not ripe.


There's probably an app for that.


Yep. All those fantastic color pictures of the Orion nebula are just that. (Personally I think there should be a requirement to post, alongside them, true color photos.)


Why? Many times the true colour images aren't even available. How would you represent an infrared or x-ray picture properly, in your view?

False colour images aren't meant to mislead. They're meant to be more useful to understand what you're looking at. "True colour" has no meaning, really.


I'd say because, without a modicum of knowledge, you'd take it for granted. Compare to "images" of dinosaurs, where you'll often read "this is what it should look like" vs. "we know squat about the color of their skin". And for space pictures, which are 99% touted as "pictures" and not reproductive works, it's even worse. Hell, as I'm writing this I'm tempted to double-check what I know about the color-correctness of any space photo, and my point was already that I'm in the second stage of not knowing: that there could be something I don't know. [0]

[0] https://en.wikipedia.org/wiki/Four_stages_of_competence


Why aren't space photos real? For deep space photos, where we have to wait for individual photons, maybe they're false color, but many photos are taken by putting colored filters in front of the lens and then combining the exposures, so they are actually true color.


I'm not saying they're fake - I am just saying people believe what they see. So a specific or general disclaimer of "the colors might not be accurate" would be cool, if it's the case for a specific photo.


Tell that to the color-blind.


That's what I understood at first, but what seems to be the case is that light from deep space undergoes "red shift", and by the time it reaches us, visible spectrum has been shifted into x-ray spectrum. The "false color" is just a correction back to almost the original visible spectrum. The degree of red shift is how we determine distance.


Red shift only comes into play for distant galaxies. Nothing in our galaxy, or nearby galaxies, is going to be red shifted in any significant way.

> visible spectrum has been shifted into x-ray spectrum.

No, X-rays are way more energetic than visible light (much shorter wavelengths). Red shift, as the name implies, refers to the shifting of wavelengths toward the red/infrared side of the spectrum, not toward the blue/ultraviolet/X-ray/gamma-ray side. The most distant, oldest thing we can see is in the microwave wavelengths, and that's the remnant of the Big Bang.

The real issues are dust and the inverse-square law that radiation intensity follows.

Some objects are obscured by dust, and so the only way to see them is by looking at wavelengths that are able to penetrate the dust.

The further an object is, the less intense the radiation is reaching us, so we have to stare at it for a very long time in order to collect enough photons to form good images. This is why views of Andromeda through a telescope with your eyeballs do not look like this visible light image of Andromeda: https://apod.nasa.gov/apod/ap991114.html

Instead it looks more like this under the absolute best viewing conditions on Earth with a quality telescope (nominal is a blurred version of this): http://www.deepskywatch.com/images/articles/see-in-telescope...


hey thanks for clearing that up.


Exactly. Researchers have been doing tomography with energy-dispersive detectors for decades, especially at synchrotrons. They often represent that data in 3D with false color indicating material properties. If there is something particularly novel about this work, it's not clear from the article, which reads like a rehash of a University PR piece.


TV cameras originally produced color by shooting black-and-white video through three different filters, transmitting the three signals as a bundle, and feeding them to the electron scanner. The colors didn't get mixed until after they hit the TV screen, as the photons were traveling to our eyes.


That's still how most imaging sensors work. Few, if any, actually parse the photon energy from within the imaging pixel itself.


>> I wish we could have a peek inside the patient as if we had opened it up but this is not yet that.

Surgery rarely looks like that. We are messy creatures. Our parts all overlap and fold among each other. It never looks as open as in the scan, short of dissecting the patient well beyond what is ever needed for any actual procedure.

The example image certainly looks cool, but it is too 2D. It doesn't have the 3D structure visible in traditional X-rays. An expert can look at an old photographic X-ray and see the layers of tissue and bone atop each other; it is a rather good 2D visualization of a complex 3D shape. Giving tissues more solid color takes away from the translucence and, imho as a non-doctor, removes much of the information.


I agree, PCDs have been available for some time, but this is the first time a 3D reconstruction of a living human has been produced. The image colors do actually represent the energy spectrum of the transmitted photons. However, if we wish to see this information, we must map the X-ray "color" to the visible spectrum, or there would be nothing to visualize. In this case, using a tissue color scheme that we understand is just convenient.

This tech also does work quite nicely on complex anatomy.


>Coloring is arbitrary

By my understanding, it's not possible to determine an object's interaction characteristics at wavelength X by probing it at wavelength Y. (Unless you were able to create some sort of beat frequency tuned to mimic X by sending in Y+delta? Just thinking off the top of my head.) It seems fraudulent for the company to say that it's capturing color.

>these type of sensors are able to put different bands to different buckets.

Would the best summarization be: photon counting sensors allow for the capture of X-ray images with higher dimensionality?


Pretty much. Normal X-ray sensors are like camera sensors and produce an estimate of the total energy of the photons per pixel. Photon counting sensors are able to estimate how many photons in a certain energy range hit the sensor. So instead of the single value '1023', you might get '63' below 20 keV, '500' between 20 and 50 keV, and '50' above 50 keV: a poor man's spectroscopy. A naive way to produce 'color images' would be to put those buckets into the red, green, and blue channels. Different materials along the beam path might then yield equal values with a normal X-ray sensor but slightly different colors with a photon counting sensor.
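A minimal sketch of that bucket-to-RGB idea in Python; the bin boundaries, the per-bucket normalization, and the bucket-to-channel assignment are all arbitrary choices, as noted above:

```python
import numpy as np

def bins_to_rgb(low, mid, high):
    """Map three per-pixel photon-count buckets to an RGB image.

    Each argument is a 2-D array of counts for one energy bucket
    (e.g. <20 keV, 20-50 keV, >50 keV). Which bucket goes to which
    color channel is purely a convention.
    """
    channels = []
    for counts in (low, mid, high):
        c = counts.astype(float)
        # Normalize each bucket independently to [0, 1] for display.
        c = (c - c.min()) / (c.max() - c.min() + 1e-12)
        channels.append(c)
    # Stack as (H, W, 3): red = low bucket, green = mid, blue = high.
    return np.stack(channels, axis=-1)
```

Two materials whose summed attenuation looks identical to an energy-integrating sensor can then end up with different hues, because their per-bucket counts differ.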


The way "color images" are produced is to map the Hounsfield scale to the visible colour spectrum: i.e. blood red, bone white, steel gray etc.

https://en.wikipedia.org/wiki/Hounsfield_units#Value_in_part...

CT machines have this calibrated as standard, DICOM images have the corresponding fields for the values. Any DICOM viewer will do this (i.e. 3D Slicer, ParaView, OsiriX etc.)
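For illustration, a crude threshold-based HU-to-color mapping might look like the sketch below; the HU cut-offs are rough textbook figures (air around -1000 HU, fat around -100 HU, soft tissue/blood roughly 0-80 HU, bone 300+ HU), not a validated clinical lookup table:

```python
def hu_to_color(hu):
    """Map a Hounsfield unit value to an illustrative RGB triple.

    Thresholds are rough, commonly quoted ranges, not a clinical LUT.
    """
    if hu < -200:
        return (0, 0, 0)          # air / lung: black
    if hu < 0:
        return (255, 220, 150)    # fat: pale yellow
    if hu < 80:
        return (200, 40, 40)      # soft tissue / blood: red
    return (255, 255, 255)        # bone / metal: white
```

Real DICOM viewers do the same thing with smoother, user-adjustable transfer functions rather than hard thresholds.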

As you say, there is nothing new in this, just PR. Medipix has also been around for a while; it's basically a solid state detector with an integrated USB readout on the silicon. Neither was invented at CERN, it's COTS.


To be more precise, CERN took part in Medipix3's development.

https://medipix.web.cern.ch/medipix3-collaboration-members

Yes, this is a PR release for tech using the Medipix3. If I understand correctly (which I very probably don't since this HN thread is the first time I've heard of this technology), this is analogous to faster/more accurate AI software being developed with some chip company's latest massively parallel processor - representative of real, but nowhere near groundbreaking, technological advancement.


In this case it's more than an HU-value LUT, though. Photon counting allows per-energy-bin attenuation measurements that capture more information. It's cool and promising, but costly and not new per se.


Agreed. Photon counting has been around in (medical) imaging for ages.


> I wish we could have a peek inside the patient as if we had opened it up but this is not yet that.

What's stopping us? Are you saying that there's no way to tell different kinds of tissues apart (e.g. not even with an MRI)? I imagine if we did, we could also add "color" (via post-processing) and enable a "dissection". Or are you saying it has something to do specifically with color?


I think the point was that the x-ray 'colors' do not have 1:1 mapping with the colors that our eyes see. So the color is useful to discern different tissues and bones and stuff, but the colors won't look like the colors of those things in real life.

We see in a range of wavelengths of roughly 0.4-0.7 µm. X-rays have no overlap with our visible range. Therefore the best you can do is define some function that maps X-rays into our color range, but things look very different in X-ray land than they do in color land. For starters, most things would be transparent if you could see X-rays (hence X-ray imaging!).

http://www.columbia.edu/~vjd1/electromag_spectrum.htm


Maybe a good way of explaining it would be like this: if someone magically had a blue spine, this technique would still show their spine as being white (unless the doctor already knew their spine was blue and adjusted the parameters accordingly)?


My point is, showing the spine as white on an X-ray (well, that's probably easy, but generalizing this to other tissues and their "normal"/"expected" colors probably isn't) would be tremendously helpful in diagnosis... "peeking inside patients without cutting them up". And I'm wondering, what's stopping us from doing just that?


There are slightly more details on the product page [0]. It looks like they have a small bore version right now, they're working on making a human sized one. There is one test image that shows part of a wrist, a watch, and part of a hand.

[0] https://www.marsbioimaging.com/mars/overview/


Oh yeah, and if you scroll down, there's a lot more detail. Sounds like you can customize the color mapping somewhat in their software -- there's mention of adjustable binning and customizable energy thresholds.


Another article at http://www.eurekamagazine.co.uk/design-engineering-news/firs... It sounds like the chips detect the energy of each X-ray photon, which is equivalent to a wavelength, and that data is then used to synthesize a 3D image.


The 3D nature of the output is still generated using tomography (CT). The colour comes from the x-ray energy which is detected using two thresholds giving effectively two channels of data.


https://en.wikipedia.org/wiki/Medipix

After much searching I found the energy resolution here [1], stated as <2 keV. For comparison, a modern silicon drift diode fluorescence detector will have an energy resolution of ~130 eV.

[1] http://dpnc.unige.ch/seminaire/talks/campbell.pdf

Also: http://iopscience.iop.org/article/10.1088/1748-0221/8/02/C02...


This feels like a real black swan event to me. I guess I just assumed that black-and-white x-rays were as good as it gets, and that there wasn't any room for improvement. I've never even thought about the idea of a color x-ray. It's very exciting to see something that could change the entire field of radiology.


Oh nonononono please no, no black swan event. The highest voted comment in this thread is very right.

I've worked on an X-ray machine during my second internship as an embedded engineer (fantastic internship!). My task was to optimize the way images and colors are presented to the viewer, given the measured X-ray data.

What I learned there is that whatever is presented to you on the screen is _completely arbitrary_.

When you can measure a higher range of X-ray frequencies, you have more room to play with the colors. Which colors you actually assign to the data is _absolutely arbitrary_, and technology similar to this has existed for a long time.


Oh I see, yes it's a bit underwhelming if that's the case. I don't even know if "true color" x-rays would be very useful, but I was thinking that being able to view the actual color of the tissue might be useful, and could convey some information that you can't get from density.


This is an article with a few paragraphs that is sensationalizing what is available while showing a single photo. Let's not overdo the hype.


OsiriX (and others) have been rendering radiology imaging in color and 3D for quite a while.

https://www.osirix-viewer.com/osirix/osirix-md/

https://www.osirix-viewer.com/resources/dicom-image-library/

Patients can even download the lite version and review their own images.


While the title is disingenuous because it's easy to misunderstand, it's actually very close to the truth. X-rays are just a very energetic form of light, i.e. electromagnetic radiation above the spectrum of visible light.

Until now we only measured the general brightness of the X-rays, and OsiriX translates (even marginal) differences in brightness into different colors. This makes features of similar brightness easier for a human to spot.

This new technique actually measures different wavelengths of the X-rays at the detector, and since we call different wavelengths of visible light "color", it's a good analogy to use the same word for X-rays. It can differentiate different colors of X-rays.

This is really promising, nuances in chemical composition may lead to differences in opacity for different wavelengths.


> Patients can even download the lite version and review their own images.

This causes radiology companies so much pain. How do you give the images to them when people don’t have CD drives anymore? How do you get them to understand that they aren’t jpgs? How do you get them to a stage where they can open the images (they usually don’t have a Mac)? And the final pain is the last call. “What’s the black thing in the back of the white bit by the edge? Is it cancer?”


Yes I wouldn't be so impressed with a colorized image if the source was just a regular x-ray. But my impression was that they were capturing multiple frequencies, so it was a lot of extra information to the image. I don't know anything about radiology but I'm always excited to see some progress.


“The new device, based on the traditional black-and-white X-ray, incorporates particle-tracking technology developed for CERN's Large Hadron Collider.”

Now I have an easy-to-understand example to show my skeptical friends how CERN's research benefits us in myriad ways.


Fundamental science research always has applications decades in the future. It's pretty rare that practical applications are found "now".


Famously Hertz claimed there would be no application of radio waves.


There is a distinction to be made between the results of the fundamental research itself and the side products of the engineering that enables that research. From the sounds of it, this X-ray thingy belongs more in the latter category.


Your problem will be that it is very easy for somebody knowledgeable in the field to argue that this could have been done entirely without CERN's research. Actually, similar stuff has already been done.


There's also the whole World Wide Web thing...


Are you referring to the stripped-down SGML, RPC, or hyperlinks? Or the combination? Because all three had been around long before, in many flavours and mixes. The WWW variant's adoption had more to do with politics than scientific/technical merit.

https://www.youtube.com/watch?v=hSyfZkVgasI

https://archive.org/details/paulotlet or https://www.youtube.com/watch?v=KLX2OGw31Oo


The GDPR consent screen for this is scary. They want to share your data with around 200 different "partners".


And a year ago they wouldn't even tell you that.


Article is short on details. Do they take pictures with X-rays at three different wavelengths, and then render the three resulting images as individual color channels?


> The colours represent different energy levels of the X-ray photons as recorded by the detector

http://www.eurekamagazine.co.uk/design-engineering-news/firs...

There are a bunch of ways to convert that to an RGB display, and I wouldn't be surprised if there were multiple rendering options you can flip between until you find the one that gives you the visual discrimination you're looking for.


My particle physics is a bit fuzzy, but isn't the energy of a photon directly related to its wavelength? So you could do a simple frequency shift (+ compression) to the visible light spectrum and map the energies to colors that way? Of course, that is the naive approach and might not be useful from a medical perspective.


Yeah, you could do a linear mapping, but you might want to stretch some regions and compress others. But overall it just sounds like a wavelength transform, yes.
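A toy version of such a mapping, assuming a 20-80 keV diagnostic window (both the window and the red-to-violet hue sweep are arbitrary choices, not anything from the article):

```python
import colorsys

def energy_to_rgb(e_kev, e_min=20.0, e_max=80.0):
    """Linearly map a photon energy in keV onto a visible hue.

    The 20-80 keV window is an assumed diagnostic range; energies
    outside it are clamped to the ends of the hue sweep.
    """
    t = (e_kev - e_min) / (e_max - e_min)
    t = min(max(t, 0.0), 1.0)        # clamp out-of-range energies
    hue = 0.75 * t                   # 0.0 = red ... 0.75 = violet
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)
```

Stretching some energy regions and compressing others would just mean replacing the linear `0.75 * t` with a nonlinear curve.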


And get 3 times the dose of radiation? Seems impractical, but possibly true.


A chest X-ray will give you 0.1 millisieverts. That's a 60th of the annual dose limit used in scientific facilities like CERN, and 200 times less than the limit for radiation workers.

It's still ok, it's not supposed to be that of a regular procedure.


It actually looks like it's a CT scan, which is a type of x-ray. A chest CT scan is 7 mSv.


Here is a chart: https://xkcd.com/radiation/


X-ray sources are not "monochromatic" and produce photons across a wide range:

http://www.ctlab.geo.utexas.edu/wp-content/uploads/2015/06/F...


That's not a yes or no. In normal imaging, you need three times as much white light to capture a good color image, because each portion of the film/sensor is only sensitive to a single color. Does this technology avoid that problem?


I remember discussing this as a possibility with Phil and Anthony in the CERN cafeteria in 2007/2008, when they had first licensed some imaging tech from CERN. It is absolutely thrilling to see how far the team have come since then and to see them start to get the wider coverage their achievements deserve. Seeing this on HN just made my day.



How much exposure to radiation is needed for this? Their brochure indicates 20-80 mGy/mSv. But a typical clinical head CT is only 2 mGy. AFAIK, exposures that high would not be acceptable for clinical human use.

It's probably inevitable that a 'spectral' scanner requires greater exposure than clinical CT scanners since the intensity of a spectral X-ray source apparently varies during the scan. This implies slower scans and more X-ray exposure than a conventional CT.

Apparently that's why MARS' current product is intended for preclinical (non-human) use only.


I think regular CT already causes cancer in ~1/2000 uses. So 10x that dose would probably cause cancer in ~1/200 uses? That's pretty scary.
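The arithmetic in that estimate assumes risk scales linearly with dose (the linear no-threshold model), which is itself contested. As a sketch, with the ~1/2000 baseline quoted above (not verified here) and a hypothetical ~7 mSv chest CT:

```python
def scaled_risk(base_risk, base_dose_msv, new_dose_msv):
    """Scale a per-scan cancer-risk estimate linearly with dose.

    This bakes in the linear no-threshold (LNT) assumption; both the
    baseline risk and the doses are illustrative figures only.
    """
    return base_risk * (new_dose_msv / base_dose_msv)

# 10x the dose of a ~7 mSv CT under LNT: 1/2000 * 10 = 1/200.
risk = scaled_risk(1 / 2000, 7.0, 70.0)
```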


That's assuming dosage has a linear relationship with instances of cancer. Scary regardless, though.


This is very interesting! I can't wait to learn more about it. There are, however, some questions to consider:

- For a given procedure, will the radiation dose be higher than its corresponding 'black and white' X-Ray?

- If the radiation dose is higher, does the color add additional information or does it increase the diagnostic capabilities? Traditional X-rays (CT) already distinguish between soft-tissue, bone, fat, cartilage... so will the color distinguish structures within soft-tissue?


The videos on their press page are stunning. Especially the ankle rotation video.

https://www.marsbioimaging.com/mars/media-pack/

https://drive.google.com/file/d/1JyRdqyU-j5PambGUgfs4Cx5uKf3...


Doesn't look that much better than a thin slice CT with custom colormaps ("CLUTs"), using a regular greyscale for the high density tissue (bone), and a reduced alpha channel for the low density tissue rendered in yellow.

You can get that with a Dicom station and 5 minutes, or for free using Osirix at home and trial/error if you are not used to CTs.

Ask for the CD next time you get a scan, or download one of the many examples, and play with Osirix.


That may be, but an X-ray is a simpler and cheaper procedure. To achieve “feature parity” with something more expensive is an advance.

I’m looking up the things you mentioned and they’re interesting, though. I’d love to see imaging and home-computed assessments become a bigger part of the “quantified self” movement.


What you linked to is a type of CT scan (CT uses X-rays).


Do you need a different detector? Can't you just do multiple exposures at different energies, like DEXA?


This honestly seems like raw black magic. How is this possible?


I mean, all color is, is your brain's perception of three input channels. Your eye can detect (roughly) the amount of stimulation from blue light, green light, and red light; your brain interprets this as a color.

You can do the same thing for X-rays, or any 3-channel data source.

(Fun fact: because of this, there are colors that your brain can perceive but that aren't wavelengths of light. Magenta is seen when your blue and red sensors are activated, but there is no single wavelength of light that can do this. Meanwhile, your eye can't tell the difference between monochromatic yellow light and red and green light being received simultaneously. This is why RGB monitors work, and also why you can't represent every color you can see on a computer screen.)


TL;DR it’s CT with false colors


Every now and then a question pops up as to why we have to invest billions in researching particle physics and how it can improve our lives. Well, this is how.


Surely much more to come too.



