I've been going through Cem Yuksel's "Introduction to Computer Graphics" course and thought that writing a volume renderer would be a good way to test my knowledge. Volume rendering is a common technique used to render 3D medical data. It works by ray marching with a fixed step size, sampling a 3D texture (e.g. MRI data) at each step, and accumulating opacity values along the ray.
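To make the idea concrete, here's a minimal sketch of that loop in TypeScript. It's not my actual shader code, just the core algorithm on the CPU: `sampleVolume` is a hypothetical stand-in for the 3D texture fetch (here, density 1 inside a unit sphere), and the opacity scaling constant is arbitrary.

```typescript
type Vec3 = [number, number, number];

// Hypothetical volume: density 1 inside a unit-radius sphere at the origin,
// standing in for a fetch from a 3D texture of MRI data.
function sampleVolume(p: Vec3): number {
  const r2 = p[0] * p[0] + p[1] * p[1] + p[2] * p[2];
  return r2 < 1 ? 1 : 0;
}

// March from `origin` along `dir` with a fixed step size, converting each
// density sample to an opacity and compositing front-to-back.
function rayMarch(origin: Vec3, dir: Vec3, stepSize = 0.01, maxDist = 4): number {
  let alpha = 0; // accumulated opacity
  for (let t = 0; t < maxDist; t += stepSize) {
    const p: Vec3 = [
      origin[0] + dir[0] * t,
      origin[1] + dir[1] * t,
      origin[2] + dir[2] * t,
    ];
    const density = sampleVolume(p);
    // Opacity contribution of this step (Beer-Lambert style; the 5 is an
    // arbitrary extinction coefficient for this sketch).
    const stepAlpha = 1 - Math.exp(-density * stepSize * 5);
    alpha += (1 - alpha) * stepAlpha;
    if (alpha > 0.99) break; // early ray termination
  }
  return alpha;
}
```

In the real renderer this runs per-fragment in a shader, with a color transfer function applied to each sample, but the compositing math is the same.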
The code should be easy to get started with for anyone familiar with the JS ecosystem.
Questions for the HN community: I spent 20-25% of the entire time just setting up the project and fighting issues with the JavaScript ecosystem. This experience has made me consider learning C++, Metal, and Xcode. Has anyone made the transition from WebGL/TS to C++, or done it the other way around? What was your experience with it? And what about debugging? That's a big issue with WebGL.
As for now, I'm thinking about picking up WebGPU next because it has an up-and-coming debugger made by Brendan Duncan[0] that looks promising.
Edit: Does anyone know why MRI data is delivered on CDs rather than over the web? I started working on this project specifically because some people close to me got MRI scans and received CDs of their results. I know that some MRI datasets can be huge, so downloading them over the web may not make sense, but anything under 64MB could be sent over the web, no? For reference, I believe most MRI data can be under 250MB[1].
[0] https://github.com/brendan-duncan/webgpu_inspector
[1] https://www.researchgate.net/figure/Typical-image-dimensions...
The medical industry moves slowly. Medical data is often covered by HIPAA, which is why it is rarely made easily available on the web. Using CDs is just an artifact of the industry's slow adoption of new technology.