Show HN: Volume rendering 3D data in Three.js and GLSL (github.com/suboptimaleng)
108 points by SuboptimalEng 9 months ago | 37 comments
I've been going through Cem Yuksel's "Introduction to Computer Graphics" course and thought that writing a volume renderer would be a good way to test my knowledge. It's a common technique used to render 3D medical data: it works by ray marching at a fixed step size, sampling a 3D texture (e.g. MRI data), and accumulating opacity values.
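
To make that concrete, here is a rough sketch of the core loop as a WebGL2 (GLSL ES 3.00) fragment shader kept in a TypeScript string. The uniform/varying names, step size, and density-to-opacity mapping here are illustrative, not the repo's exact code:

    // Sketch of single-pass volume ray marching; assumes the vertex shader
    // provides the ray origin/direction in normalized volume space.
    const volumeFrag = `#version 300 es
    precision highp float;
    precision highp sampler3D;

    uniform sampler3D uVolume;  // the 3D texture, e.g. 256^3 Uint8 MRI data
    in vec3 vRayOrigin;         // ray start, in [0, 1]^3 volume coordinates
    in vec3 vRayDir;            // ray direction in volume space
    out vec4 outColor;

    void main() {
      vec3 p = vRayOrigin;
      vec3 stepVec = normalize(vRayDir) * 0.01;  // march at a fixed step size
      vec4 acc = vec4(0.0);
      for (int i = 0; i < 256; i++) {
        float density = texture(uVolume, p).r;          // read the 3D texture
        vec4 src = vec4(vec3(density), density * 0.05); // toy opacity mapping
        // front-to-back compositing of color and opacity
        acc.rgb += (1.0 - acc.a) * src.a * src.rgb;
        acc.a   += (1.0 - acc.a) * src.a;
        p += stepVec;
        if (acc.a > 0.99) break;  // early exit once nearly opaque
      }
      outColor = acc;
    }`;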

Code should be easy to get started with for anyone familiar with the JS ecosystem.

Questions for the HN community: I spent 20-25% of the entire time just setting up the project and fighting issues with the JavaScript ecosystem. This experience has made me consider learning C++, Metal, and Xcode. Has anyone made the transition from WebGL/TS to C++, or gone the other way around? What was your experience with it? And what about debugging? That's a big issue with WebGL.

As for now, I'm thinking about picking up WebGPU next because it has an up-and-coming debugger made by Brendan Duncan[0] that looks promising.

Edit: Does anyone know why MRI data is handed out on CDs rather than over the web? I started working on this project specifically because some people close to me got MRI scans and received CDs of their results. I know that some MRI data can be huge, so downloading over the web doesn't always make sense, but anything under, say, 64 MB could be served over the web, no? For reference, I believe most MRI data can be under 250 MB[1].

[0] https://github.com/brendan-duncan/webgpu_inspector

[1] https://www.researchgate.net/figure/Typical-image-dimensions...




There are many more desktop tools for debugging GPU code, largely because of game development. Xcode as an IDE is pretty out of date. Visual Studio has much better debugging tools for GPUs; see Nvidia's tools. As a long-time GPU programmer, I'm particularly interested in testing out Rust.

The medical industry moves slowly. Medical data is often covered by HIPAA, which is why it is often not easily available on the web. Using CDs is just an artifact of how slowly the industry adopts new technology.


The Cancer Imaging Archive has anonymized, publicly accessible data. https://www.cancerimagingarchive.net/


Oh interesting, I always read (on r/GraphicsProgramming) that Xcode has some of the best debugging tools out there. That's the main reason it's been on my mind. I just need better debugging tools on Mac!

Fair point about the health tech industry.


Speaking of which, why haven't I heard of anyone using game engines as a platform for medical imaging and UX?


Unity and Unreal have industrial users, including medical.

You are really asking, "why does medical imaging not have budgets for software?" It faces all the same monetization challenges as all business-to-business software, which, unfortunately, tend to disfavor operationally cheap and performant solutions.


A cool property of volume rendering is that it requires much simpler calculations than other 3D rendering problems. It's simple enough that you can use homomorphic encryption to render _encrypted_ medical data!

Here is my summary of it[1] and the original paper[2] I found it in.

[1] https://azeemba.com/posts/homomorphic-encryption-with-images...

[2] https://arxiv.org/abs/2009.02122


You don't get the "run everywhere" ability that three.js has. It appears your "fighting the js ecosystem" comes from a lack of experience and familiarity.

I recently tried Metal a bit. Some features are only available for visionOS (the headset) and not iOS or macOS and vice versa.


Yea, that's what I love about the web. Though I do consider myself pretty experienced on the JS side, as I've worked with it for a while. I think the main reason I ran into these issues is that I've been trying to use the Web for something that it's not fully built for (graphics programming). It's also been a while since I last set up a new project.

Here are some questions that came up:

- Do I use Next.js or Vite.js or CRA (is that still used these days)?

  - With Vite.js, do I pick the React template or the TypeScript template?
- Okay, now I need Tailwind (because quite honestly, I'm terrible at CSS, which I admit is my own doing)

- But what if I don't want to write GLSL in raw strings and would rather use a .glsl file extension?

  - Oh, now you want to import GLSL files directly? Well, you can't do that

  - Oh, I needed to install a Vite plugin, but then TS throws errors saying it can't read files that end with .glsl

  - Wait, is my TS server even running? I thought it always starts up in VS Code

  - Okay, I can't even find the button to restart the TS server, so now I need to... okay, figured that out

  - That still didn't fix the TS errors

  - After some more research: ah, so you can import shader files as raw text with the ?raw suffix, e.g. a.glsl?raw (sketched below)
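
For anyone hitting the same wall, here is roughly where that lands with Vite + TypeScript (a sketch; the file paths are made up):

    // In a module: import the shader source as a raw string.
    import fragmentShader from './shaders/volume.glsl?raw';

    // If TS can't resolve that import, reference Vite's client types (they
    // already declare the ?raw suffix), e.g. at the top of src/vite-env.d.ts:
    //   /// <reference types="vite/client" />
    // ...or declare the module yourself in a .d.ts file:
    declare module '*.glsl?raw' {
      const src: string;
      export default src;
    }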


>I've been trying to use the Web for something that it's not fully built for

Sounds exactly like every other dev pushing JS libraries. That's not necessarily a bad thing, but it does get messy on the bleeding edge of things.


Nice Demo! Definitely agree with the lack of familiarity / tangential aspect comment. IMO your best bet is starting off with vanilla HTML + JS and then adding tooling as required / only if necessary.

E.g. Vite is handy for bootstrapping and hot reloading (you don't really need it for builds anymore given remote imports, etc.). Do you really need TypeScript? If it saves you time in the long term, sure, add it in (it requires a build step). If you are building a big UI then OK, add a CSS / JS framework. But the best bet is always to keep complexity down to a minimum. The last thing you want to be doing is debugging / working around peculiarities of other people's code.

I tend to demo in a single HTML file (you can even embed your shaders in the HTML!), which can then be easily shared / hosted.
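
For example (a sketch; the CDN URL and element id are made up): the shader can live in a non-executed script tag, and the page's inline module pulls three.js straight from a CDN:

    // Inside <script type="module"> in the same HTML file. The shader sits in
    // e.g. <script id="frag" type="x-shader/x-fragment">...GLSL...</script>.
    import * as THREE from 'https://unpkg.com/three@0.160.0/build/three.module.js';

    const fragmentShader = document.getElementById('frag').textContent;
    const material = new THREE.ShaderMaterial({ fragmentShader });
    // ...scene, camera, renderer, render loop as usual...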

On the demo: not sure if you referenced it, but I remember being wowed by a similar (official) demo back when Three was first rolling out WebGL2 support. The first commit I could find was 2018, but I'm sure it was actually earlier than that - might even have been a pre-release demo (I'm certain cross-browser support wasn't there when I tried it).

https://threejs.org/examples/webgl2_materials_texture3d.html

They used the "NRRD" format, which I hadn't heard of (but it sounds pretty close to your 1x1x1 raw files ... x-array style thing). Did you look at this approach? Have things moved on since?

Edit: maybe the real tutorial here is around pre-processing common volume data formats for use with e.g. Three.js / WebGL2
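
Roughly, the loading side of the raw-Uint8 approach looks like this in three.js (a sketch; assumes a recent release where the class is Data3DTexture (older releases call it DataTexture3D), and the file name/size are made up):

    import * as THREE from 'three';

    const size = 256;
    const buffer = await fetch('/volumes/brain_256x256x256_uint8.raw')
      .then((res) => res.arrayBuffer());
    const data = new Uint8Array(buffer);

    const volumeTexture = new THREE.Data3DTexture(data, size, size, size);
    volumeTexture.format = THREE.RedFormat;        // one channel per voxel
    volumeTexture.type = THREE.UnsignedByteType;
    volumeTexture.minFilter = THREE.LinearFilter;  // trilinear sampling
    volumeTexture.magFilter = THREE.LinearFilter;
    volumeTexture.unpackAlignment = 1;             // rows aren't padded to 4 bytes
    volumeTexture.needsUpdate = true;
    // pass it to the shader as a sampler3D uniform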


I don’t really get the “I’ve been trying to use the web for something it isn’t meant to do (graphics programming)” sentiment or the question.

What you have produced is wonderful, but has nothing to do with the web. That’s like comparing apples and oranges.

The web was built for images, and all the GLSL you have is rendering to that image. C++ or whatever, your problem domain is the pixel shader: a program, run on a GPU, for a single pixel, ray marching through a dataset that also lives on a GPU. Debugging is so hard because it's a hardware problem. Unfortunately, this is the reality of graphics programming in general.

Vite, React, TypeScript, etc. are all faff exterior to what you are doing.

If you want debugging tools, a good graphics programming experience, and a simple view with some interactivity, something like Unity, Godot, or TouchDesigner (or, if you want lower level, openFrameworks, Processing, etc.) would be leaner.


Similar to my experience coming from JS into WebGL. I always use a framework (currently Vite + React + TS). It helps keep code organized as your project grows (and it will). I don't think there's any overhead, as the GPU processing is independent and React does all its magic on the CPU as far as I'm aware. TypeScript errors sometimes come from VS Code and sometimes from the node scripts, and I can never seem to get my VS Code TS settings to match my CLI settings, so errors will appear in the IDE that my build tools ignore, or the build tools will error when the IDE says everything is fine. Usually I just use some ignore flag, because at the end of the day it's tangential to what I'm trying to do.


I need to do what you did and sit down and make a perfect Vite + React + TS boilerplate that I can use for all my web graphics experiments.

Also, your project, Shademap, is really cool btw! I remember seeing it just around the time I was getting into Three.js and graphics programming - well before the time I fully understood what a shader was.


Does it support DICOM as input files?

I was given MRI DICOM files a month ago via the SaaS platform the radiology office uses, Sectra PatientPortal.

After the MRI session I was given a login code. Back home, I could use that code on the Sectra PatientPortal.

It provides a very basic DICOM image web viewer (contrast/dynamic range modification and slice scrolling, four windows), albeit 2D only, and download of the DICOM files as well as PNG exports.

I am looking for a web-based or Linux-based FLOSS DICOM viewer, but haven't found a professional-grade one so far.


My app does not support DICOM files as input, just raw Uint8 256x256x256 files scaled 1x1x1. Maybe if I had the chance to work on it full-time I'd have the time to add those features, but it's just a side project for now.

Have you looked into 3D Slicer[0], which is a multi-platform desktop app, or the Open Health Imaging Foundation's DICOM viewer[1], which is web-based? Perhaps one of these will help.

[0] https://www.slicer.org/

[1] https://github.com/OHIF/Viewers


Brings back fun memories; I wrote something similar in college with Qt and OpenGL: https://github.com/fargiolas/qvrc. It had a super buggy but somewhat working live transfer function editor, Blinn-Phong and toon shading, and an arcball camera. Really nice project for learning GLSL.


Oh man, this is awesome. I have been periodically checking the internet for something like this that I can hand off to an intern and get some visualization of data at work. I am definitely gonna be using this.

Also I love your videos. The internet is a smaller place than it seems.


Thank you! Hope you (or the intern) can pick up right where I left off.


Three.js and React, yet not R3F - curious why?

I really like how you organized your project. As someone who's developed with Three.js for years both imperatively and in a highly-declarative way using R3F I'm always interested to see different approaches to architecture/organization.


Good question! If I knew how to do it, I'd probably write this in just TypeScript. Turns out I don't know how to set that up, so instead I start with Vite and throw React in there to handle rerenders and simple state management. I think R3F is interesting, but I've never used it because it makes graphics programming too front-endy.

The most important reason for my code structure is that I just like writing GLSL shaders! It's annoying that I need to set up Vite, TypeScript, Tailwind, React, etc. all so that I can write some shaders. I know I can write shaders in ShaderToy, but then I won't be able to upload custom 3D texture files or add simple user controls.
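
For reference, I think the no-React route would boil down to Vite's vanilla-ts template (a sketch; the project name is made up):

    // npm create vite@latest volume-renderer -- --template vanilla-ts
    // cd volume-renderer && npm install three && npm run dev
    //
    // src/main.ts then drives everything imperatively, no component layer:
    import * as THREE from 'three';

    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);
    // ...scene, camera, ShaderMaterial, and a render loop as usual...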


> I know I can write shaders in ShaderToy, but then I won't be able to upload custom 3D texture files or add simple user controls.

Maybe check out https://cables.gl


> The most important reason for my code structure is that I just like writing GLSL shaders! It's annoying that I need to set up Vite, TypeScript, Tailwind, React, etc. all so that I can write some shaders

That's what the frontend hype cabal wants you to think!

Your frontend JS could just `fetch` the GLSL files from the backend instead of trying to compile them into your build process.

There are tradeoffs to this of course, but dealing with the complexity death star of JS tooling can be opted out of.
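
Something like this (a sketch; the paths are made up):

    import * as THREE from 'three';

    // Serve the .glsl files as plain static assets and load them at runtime,
    // instead of wiring them into the bundler.
    const [vertexShader, fragmentShader] = await Promise.all([
      fetch('/shaders/volume.vert.glsl').then((r) => r.text()),
      fetch('/shaders/volume.frag.glsl').then((r) => r.text()),
    ]);

    const material = new THREE.ShaderMaterial({ vertexShader, fragmentShader });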


This is cool - for Metal, the debugging tools are pretty useful in Xcode - you can access buffers and read values directly.

Related - a while ago I made a process to convert DICOM / MRI files to voxels for printing at the native resolution of 3D printers [0]. It means you can preserve all the fine detail of the scans, rather than converting to a mesh (segmentation).

On the CD question - it's probably because there is little incentive to build a secure / cross platform solution for patients to access their scans. The CD model is very outdated, but does work, and there is no need for HIPAA compliance even though a CD isn't very secure.

[0] https://www.lukehale.com/voxelprinting


That's one of the main benefits of Metal and it's made me think twice about it. I struggled quite a bit trying to debug my shaders. But Brendan Duncan's WebGPU Inspector caught my eye so I'll check that out for now.

Oh wow, your project looks really cool too! I'll check it out.

Interesting. I did not know that CDs did not need to get HIPAA compliance. I suppose web apps would certainly need that. I wonder what will happen once Gen Alpha starts needing to get MRI scans.


Most volume renderers lack a good transfer function editor. When analyzing volumes, especially in explorative analysis, the most effective tool is to dial in colors and opacities for certain value ranges in order to find structures.

The volume rendering engine I have been working on uses a histogram for the value distribution, and on top of it, one can draw lines that indicate the opacity. Additionally, one can set colors to the control points, which are then linearly interpolated for the given ranges.
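
In code, the control-point idea is roughly this (a simplified sketch; the names and types are illustrative):

    // Build a 256-entry RGBA lookup table from sorted control points: alpha
    // follows the opacity polyline, RGB is linearly interpolated between the
    // control-point colors. Upload it as a 256x1 texture and index it by the
    // sampled density in the shader.
    type ControlPoint = {
      value: number;                    // position in [0, 1] along the histogram
      opacity: number;                  // [0, 1]
      color: [number, number, number];  // RGB in [0, 1]
    };

    function buildTransferFunction(points: ControlPoint[], size = 256): Uint8Array {
      const sorted = [...points].sort((a, b) => a.value - b.value);
      const lut = new Uint8Array(size * 4);
      for (let i = 0; i < size; i++) {
        const v = i / (size - 1);
        // find the surrounding control points (clamped at the ends)
        let lo = sorted[0];
        let hi = sorted[sorted.length - 1];
        for (let j = 0; j < sorted.length - 1; j++) {
          if (v >= sorted[j].value && v <= sorted[j + 1].value) {
            lo = sorted[j];
            hi = sorted[j + 1];
            break;
          }
        }
        const t = hi.value === lo.value
          ? 0
          : Math.min(1, Math.max(0, (v - lo.value) / (hi.value - lo.value)));
        for (let c = 0; c < 3; c++) {
          lut[i * 4 + c] = Math.round(255 * (lo.color[c] + t * (hi.color[c] - lo.color[c])));
        }
        lut[i * 4 + 3] = Math.round(255 * (lo.opacity + t * (hi.opacity - lo.opacity)));
      }
      return lut;
    }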


I've received MRI data by both CD and online. I think it just comes down to what systems the hospital/consultant supports and how recently their systems were set up or modernised.


I also heard that sometimes you need to physically carry CDs to the doctor's office. I suppose getting pictures of your MRI scans via the web is a good middle ground.

It just really irked me that people got CDs instead of being fully digital. The last time I used a CD (besides my PS4) was 10+ years ago. I don't even have a CD player anymore. I'm sure Gen Alpha will look at CDs the way I look at floppy disks.


I bought a cheap USB CD/DVD drive just to view my girlfriend's medical images. This was only two years ago and they wouldn't/couldn't just put them on a USB stick for me.


This looks great - will try it out tonight.

At first I thought you'd been using ImGui, as your interface looks similar.

Maybe that's a good starting point for your C++ career.


Cool, thank you! I'm using a library called dat.gui, which was easy to download and use thanks to NPM. Maybe I've been too hard on the JS ecosystem heh.
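
For example, wiring dat.gui up to shader uniforms looks roughly like this (a sketch; the uniform and parameter names are made up):

    import * as dat from 'dat.gui';
    import * as THREE from 'three';

    const uniforms = {
      uStepSize: { value: 0.01 },
      uThreshold: { value: 0.15 },
    };
    // the material then goes on the fullscreen/box mesh as usual
    const material = new THREE.ShaderMaterial({ uniforms /*, vertexShader, fragmentShader */ });

    const gui = new dat.GUI();
    const params = { stepSize: 0.01, threshold: 0.15 };
    gui.add(params, 'stepSize', 0.001, 0.05).onChange((v: number) => (uniforms.uStepSize.value = v));
    gui.add(params, 'threshold', 0, 1).onChange((v: number) => (uniforms.uThreshold.value = v));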

I've heard good things about ImGui, so I'll have to check that out if I ever go down the C++ route.


I would recommend deploying everything with Netlify, since it's free and you just have to click two buttons. That way you have a demo online, which is the overwhelming advantage of the web compared to anything else in programming.


Dan Kaminsky once showed me a volume renderer he was noodling with, but instead of MRI data, he had loaded frames from video footage of an explosion. I remember it having a 4D feel to it.


Check out vtk.js (and VTK proper) as a fairly mature implementation of this. I haven’t kept up with the development but it was pretty far ahead when it came out years ago.


Author of the WebGL volume rendering tutorial [0] you mentioned in the readme here, great work!

Working in WebGL/JS is nice since you can deploy it everywhere, but it can be really hard for graphics programming as you've found because there are very few tools for doing real GPU/graphics debugging for WebGL. The only one I know of is [1], and I've had limited success with it.

WebGPU is a great next step, it provides a modern GPU API (so if you want to learn Metal, DX12, Vulkan, they're more familiar), and modern GPU functionality like storage buffers and compute shaders, not to mention lower overhead and better performance. The WebGPU inspector [2] also looks to provide a GPU profiler/debugger for web that aims to be on par with native options. I just tried it out on a small project I have and it looks really useful. Another benefit of WebGPU is that it maps more clearly to Metal/DX12/Vulkan, so you can use native tools to profile it through Chrome [3].

I think it would be worth learning C++ and a native graphics API, you'll get access to the much more powerful graphics debugging & profiling features provided by native tools (PIX, RenderDoc, Nvidia Nsight, Xcode, etc.) and functionality beyond what even WebGPU exposes.

Personally, I have come "full circle": I started with C++ and OpenGL, then DX12/Vulkan/Metal, then started doing more WebGL/WebGPU and JS/TS to "run everywhere", and now I'm back writing C++ but using WebGL/WebGPU and compiling to WebAssembly to still run everywhere (and native for tools).

With WebGPU, you could program in C++ (or Rust) and compile to both native (for access to debuggers and tools) and Wasm (for wide deployment on the web). This is one of the aspects of WebGPU that is most exciting to me. There's a great tutorial on developing WebGPU w/ C++ [4], and one on using it from JS/TS [5].

[0] https://www.willusher.io/webgl/2019/01/13/volume-rendering-w...

[1] https://spector.babylonjs.com/

[2] https://github.com/brendan-duncan/webgpu_inspector

[3] https://toji.dev/webgpu-profiling/pix

[4] https://eliemichel.github.io/LearnWebGPU/

[5] https://webgpufundamentals.org/


Wow! First of all, thank you for your amazing blog posts and tutorials! I wouldn't have been able to make it this far without them. Seriously, I was stuck for so long until a random Google search linked me to that WebGL ray-casting article you wrote. (I'd pin your comment if I could.)

The funny thing is that I was getting more confident about using the JS + WebGL/WebGPU ecosystem for graphics programming after having read your posts. Very interesting to hear that you've come full circle back to C++ + WebGL/WebGPU + WebAssembly. I'll look at the options more closely as I head down this path. Thank you for your tips and advice!

Edit: Perhaps you'd find my "What is WebGPU" video on YouTube interesting. I'd love to get it fact-checked by someone who's been doing WebGL/WebGPU way longer than most people! I only got into this field ~2 years ago.


Sure I'd be happy to check it out, my email's in my profile (or Github/website).

There are some tradeoffs w/ WebAssembly as well (not sharing the same memory as JS/TS is the biggest one), and debugging can be a bit tough, though now there's a good VSCode plugin for it [0]. Another reason I moved back to C++ -> Wasm was the performance improvement of Wasm vs. JS/TS, but the cross-compilation to native/web was the main motivator.

[0] https://marketplace.visualstudio.com/items?itemName=ms-vscod...


It's interesting to hear that C++ is faster even though there is overhead in moving data between WASM and JS/TS. I'm not yet ready to "take the leap" to learn C++ + Metal + Xcode + WASM because those are some big hurdles to jump through (especially just in my free time), but you do raise some good points.

I'm certain you could turn this knowledge into a blog post and help many more engineers who are silently struggling through this path. Self-studying graphics programming is tough!

It should pop up first on YouTube if you search "What is WebGPU Suboptimal Engineer", but I'll link it here[0] in case anyone else wants to watch it. (No need for you to actually fact-check it. I didn't mean to put random work on your plate on a Sunday haha.)

[0] https://www.youtube.com/watch?v=oIur9NATg-I



