I worked on an X-ray machine during my second internship as an embedded engineer (fantastic internship!).
My task was to optimize the way images and colors are presented to the viewer, given the measured X-ray data.
What I learned there is that whatever is presented to you on the screen is _completely arbitrary_.
When you can measure a wider range of X-ray frequencies, you have more room to play with the colors. Which colors you actually assign to the data is _absolutely arbitrary_, and technology similar to this has existed for a long time.
Patients can even download the lite version and review their own images.
Until now we have only measured the overall brightness of the X-rays, and OsiriX translates even marginal differences in brightness into different colors. This makes features of nearly the same brightness easier for a human to tell apart.
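The idea can be sketched with a toy window/level mapping (my own illustration, not OsiriX internals): a narrow window of brightness values is stretched across a full color ramp, so tiny brightness differences become large color differences.

```python
# Toy sketch of brightness-to-color windowing (illustrative only, not
# OsiriX internals): values inside a narrow window are stretched across
# a full color ramp, so small brightness differences become visible.

def window_to_color(value, center=1000.0, width=200.0):
    """Map a raw intensity to an (r, g, b) tuple in [0, 255].

    Values inside the window [center - width/2, center + width/2] are
    spread over the whole ramp; everything outside clips.
    """
    lo = center - width / 2.0
    t = (value - lo) / width          # normalized position in the window
    t = max(0.0, min(1.0, t))         # clip to [0, 1]
    # Simple blue-to-red ramp: even a small change in t shifts the hue.
    return (int(255 * t), 0, int(255 * (1.0 - t)))
```

With a width of 200, two pixels only 20 units apart land a tenth of the ramp away from each other, which is far easier to see than a 2% brightness change in grayscale.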
This new technique actually measures different wavelengths of the X-rays at the detector. Since we call the different wavelengths of visible light "color", it's a natural analogy to use the same word for X-rays: the detector can differentiate different colors of X-rays.
This is really promising: nuances in chemical composition may lead to differences in opacity at different wavelengths.
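A toy Beer-Lambert example shows why that matters (the coefficients below are made up for illustration, not real physics data): two materials can block a similar amount of radiation overall, yet the ratio of their attenuation at two energies is a thickness-independent signature that tells them apart.

```python
import math

# Hypothetical linear attenuation coefficients (per cm) at a low and a
# high X-ray energy -- illustrative numbers, not measured values.
MATERIALS = {
    "bone":   {"low": 0.60, "high": 0.30},
    "iodine": {"low": 0.90, "high": 0.25},
}

def transmitted(material, energy, thickness_cm):
    """Beer-Lambert law: fraction of photons passing through the slab."""
    mu = MATERIALS[material][energy]
    return math.exp(-mu * thickness_cm)

def attenuation_ratio(material, thickness_cm=1.0):
    """Ratio of log-attenuations at the two energies.

    For a single material this equals mu_low / mu_high, so it does not
    depend on how thick the sample is -- it characterizes the material.
    """
    a_low = -math.log(transmitted(material, "low", thickness_cm))
    a_high = -math.log(transmitted(material, "high", thickness_cm))
    return a_low / a_high
```

Here bone gives a ratio of 2.0 and iodine 3.6 regardless of thickness, which is the kind of extra axis a spectral detector adds on top of plain brightness.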
This causes radiology companies so much pain. How do you give the images to patients when people don’t have CD drives anymore? How do you get them to understand that the images aren’t JPEGs? How do you get them to a stage where they can open the images (they usually don’t have a Mac)? And the final pain is the last call: “What’s the black thing at the back of the white bit by the edge? Is it cancer?”