+ Protocols/traits/interfaces and using them for polymorphism, think intersectable/primitive types (see the trait sketch after this list).
+ It typically involves understanding both heap- and stack-allocated memory in the language, as well as the general memory model, to build scene graphs etc.
+ Building a small linear algebra library, which usually touches low-level operations and performance work, as well as operator overloading if the language supports it.
+ Writing images to disk via pixel buffers.
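To make the first and third points concrete, here's a minimal Rust sketch of what they can look like: a Vec3 with overloaded operators and an Intersectable trait so the scene is just a list of heap-allocated primitives. All names here are illustrative, not from any particular tutorial.

    use std::ops::{Add, Mul, Sub};

    // Minimal vector type; overloading +, - and scalar * is the
    // "operator overloading" part of building the math library.
    #[derive(Clone, Copy, Debug)]
    struct Vec3 { x: f64, y: f64, z: f64 }

    impl Add for Vec3 {
        type Output = Vec3;
        fn add(self, o: Vec3) -> Vec3 { Vec3 { x: self.x + o.x, y: self.y + o.y, z: self.z + o.z } }
    }
    impl Sub for Vec3 {
        type Output = Vec3;
        fn sub(self, o: Vec3) -> Vec3 { Vec3 { x: self.x - o.x, y: self.y - o.y, z: self.z - o.z } }
    }
    impl Mul<f64> for Vec3 {
        type Output = Vec3;
        fn mul(self, s: f64) -> Vec3 { Vec3 { x: self.x * s, y: self.y * s, z: self.z * s } }
    }
    impl Vec3 {
        fn dot(self, o: Vec3) -> f64 { self.x * o.x + self.y * o.y + self.z * o.z }
    }

    struct Ray { origin: Vec3, dir: Vec3 } // dir assumed normalized

    // The trait every primitive implements: polymorphism over
    // spheres, planes, triangles, ...
    trait Intersectable {
        fn intersect(&self, ray: &Ray) -> Option<f64>; // hit distance, if any
    }

    // The scene is then a heap-allocated list of trait objects.
    type Scene = Vec<Box<dyn Intersectable>>;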
Primarily though, I think raytracers are very fun projects because you can produce nice-looking results quickly, which I find helps with motivation and passion for the project. I'm pretty pleased with some of my renders already.
That's the mega-tedious part that's no fun to debug. It's usually where my raytracer projects end haha
What also helps a lot is that you can use only spheres to create a Cornell box (use very large spheres for the walls). And ray-sphere intersection is 'easy'.
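For reference, the 'easy' version in the same sketch style as above (project the center onto the ray, then Pythagoras; assumes the hypothetical Vec3/Ray/Intersectable from earlier and a normalized ray direction):

    struct Sphere { center: Vec3, radius: f64 }

    impl Intersectable for Sphere {
        fn intersect(&self, ray: &Ray) -> Option<f64> {
            let l = self.center - ray.origin;   // origin -> center
            let t_ca = l.dot(ray.dir);          // projection onto the ray
            if t_ca < 0.0 { return None; }      // sphere is behind us
            let d2 = l.dot(l) - t_ca * t_ca;    // Pythagoras: center-to-ray distance^2
            let r2 = self.radius * self.radius;
            if d2 > r2 { return None; }         // ray passes outside the sphere
            let t_hc = (r2 - d2).sqrt();        // half-chord length
            Some(t_ca - t_hc)                   // nearest intersection distance
        }
    }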
Then the next step is path tracing. This will help you learn a lot about handling recursive processes (implemented with or without actual recursion).
Other areas that you learn about:
+ How scope is handled
+ (In dynamic languages) how conversion between floats and ints works.
+ Multi-threading (see the sketch below).
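A rough sketch of the multi-threading part, since rays are embarrassingly parallel: split the image into row chunks and render each on its own scoped thread. render_row here is a hypothetical stand-in for your actual per-pixel loop.

    use std::thread;

    // Hypothetical stand-in for tracing every pixel in one row.
    fn render_row(y: usize, width: usize) -> Vec<[u8; 3]> {
        (0..width).map(|x| [(x % 256) as u8, (y % 256) as u8, 0]).collect()
    }

    fn render(width: usize, height: usize, n_threads: usize) -> Vec<Vec<[u8; 3]>> {
        let mut rows: Vec<Vec<[u8; 3]>> = vec![Vec::new(); height];
        let chunk = (height + n_threads - 1) / n_threads;
        thread::scope(|s| {
            // Disjoint mutable chunks: no locks needed, each thread
            // writes only its own rows.
            for (i, block) in rows.chunks_mut(chunk).enumerate() {
                s.spawn(move || {
                    for (j, row) in block.iter_mut().enumerate() {
                        *row = render_row(i * chunk + j, width);
                    }
                });
            }
        });
        rows
    }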
Path tracing doesn't trace dedicated shadow rays towards the light sources; instead it just sends several rays in different directions and accumulates the resulting colors of those rays.
(But it seems like every person you ask has a different meaning in mind for terms like raycasting, path tracing and physically based rendering.)
* shoot a ray from the pixel
* when the ray hits an object check how much light that point receives
* bounce the ray according to the hit material properties
* repeat the bounces X times and gather how much light the pixel will receive.
Because most bounces are in a random direction, it's best to sample each pixel multiple times. And of course more bounces give a more realistic global illumination result.
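Put together, those four steps can be written as an iterative loop rather than literal recursion (the 'with or without recursion' point above). This sketch assumes the earlier Vec3 plus a componentwise Vec3 * Vec3 multiply and zero()/one() constructors; trace, sky and the Hit methods are hypothetical hooks into your scene and material code.

    fn trace_path(mut ray: Ray, max_bounces: u32) -> Vec3 {
        let mut color = Vec3::zero();      // light gathered so far
        let mut throughput = Vec3::one();  // fraction of light surviving the bounces
        for _ in 0..max_bounces {
            let Some(hit) = trace(&ray) else {
                // Ray escaped the scene: pick up the background color.
                color = color + throughput * sky(&ray);
                break;
            };
            // Light this surface emits itself (e.g. the ceiling lamp).
            color = color + throughput * hit.emitted();
            // Bounce according to the material; usually randomized,
            // which is why you average many samples per pixel.
            let (attenuation, bounced) = hit.scatter(&ray);
            throughput = throughput * attenuation;
            ray = bounced;
        }
        color
    }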
Minor nitpick, but it has nothing to do with the fact that it renders polygons; ray tracing can also render polygons. More precisely, game engines use rasterization, which works by projecting triangles onto the screen rather than tracing rays through the screen.
Meanwhile, I don't know how much effort would be required to effectively rasterize a beach with individually modeled grains of sand. https://www.fxguide.com/featured/the-tech-of-pixar-part-1-pi...
"This requires a bit more geometry. Recall from last time that we detect an intersection by constructing a right triangle between the camera origin and the center of the sphere. We can calculate the distance between the center of the sphere and the camera, and the distance between the camera and the right angle of our triangle. From there, we can use Pythagoras’ Theorem to calculate the length of the opposite side of the triangle. If the length is greater than the radius of the sphere, there is no intersection."
The two sides he describes have the camera in common, so the "opposite" side of that triangle is the line from the center of the sphere to the right angle. I don't see how this helps...
Edit: OK, I finally get it, but I think he should just label some of these lengths on the diagram with letters (a, b, c, etc.) and then show how they are related by stating Pythagoras' theorem explicitly...
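Something like this, maybe (my labels, not the article's): write O for the camera origin, C for the sphere center, and \hat{d} for the normalized ray direction; then

    a = (C - O) \cdot \hat{d}   % camera to the right angle, along the ray
    c = \lVert C - O \rVert     % hypotenuse: camera to sphere center
    b = \sqrt{c^2 - a^2}        % Pythagoras: center to the right angle

and the ray misses the sphere exactly when b > r.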
Thanks, I looked at scratchapixel but only found the stuff about how pinhole cameras work.
I'm not sure I would claim that: with a line-drawing routine in hand (a for loop), you can have 3D perspective renderings of objects with a few matrix-vector multiplies.
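As a sketch of what I mean (my own toy construction): one 4x4 matrix-vector multiply plus the perspective divide takes a camera-space point to screen coordinates; draw lines between the projected vertices of each edge and you have a wireframe render.

    // 4x4 matrix times a homogeneous point.
    fn mat_mul(m: &[[f64; 4]; 4], p: [f64; 4]) -> [f64; 4] {
        let mut out = [0.0; 4];
        for r in 0..4 {
            for c in 0..4 {
                out[r] += m[r][c] * p[c];
            }
        }
        out
    }

    // Camera-space point (camera looking down +z) to normalized screen
    // coords. f is the focal length; near/far clipping is omitted here.
    fn project(p: [f64; 3], f: f64) -> (f64, f64) {
        let m = [
            [f,   0.0, 0.0, 0.0],
            [0.0, f,   0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 1.0, 0.0], // copy z into w for the perspective divide
        ];
        let [x, y, _, w] = mat_mul(&m, [p[0], p[1], p[2], 1.0]);
        (x / w, y / w) // farther points shrink toward the center
    }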
But that is not the only way. You can iterate over a plane and test whether the coordinates are within a triangle. It ends up being very similar to the code you'd have for ray tracing a triangle.
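That similarity is easiest to see with edge functions: a point is inside the triangle iff it lies on the same side of all three edges, and the same half-plane tests show up in a ray-triangle intersector once you've hit the triangle's plane. A toy 2D version:

    // Twice the signed area of (a, b, p); the sign tells you which
    // side of edge a->b the point p lies on.
    fn edge(a: (f64, f64), b: (f64, f64), p: (f64, f64)) -> f64 {
        (b.0 - a.0) * (p.1 - a.1) - (b.1 - a.1) * (p.0 - a.0)
    }

    fn inside(tri: [(f64, f64); 3], p: (f64, f64)) -> bool {
        // Same side of all three edges (counter-clockwise triangle assumed).
        edge(tri[0], tri[1], p) >= 0.0
            && edge(tri[1], tri[2], p) >= 0.0
            && edge(tri[2], tri[0], p) >= 0.0
    }

    fn main() {
        let tri = [(0.5, 0.5), (8.5, 1.5), (3.5, 7.5)];
        // "Iterate over a plane": walk the pixel grid and test each point.
        for y in 0..9 {
            for x in 0..10 {
                print!("{}", if inside(tri, (x as f64, y as f64)) { '#' } else { '.' });
            }
            println!();
        }
    }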