
A reasonably speedy Python ray tracer - AlexeyBrin
http://www.excamera.com/sphinx/article-ray.html
======
IvanK_net
I made one in Javascript a long time ago :)
[http://renderer.ivank.net/](http://renderer.ivank.net/)

After some time you can get this:
[http://renderer.ivank.net/balls.jpg](http://renderer.ivank.net/balls.jpg) :)

Edit: I am glad you like it! I also made this fully-GPU renderer (actually, it
is a game): [http://powerstones.ivank.net/](http://powerstones.ivank.net/)

~~~
FeepingCreature
I also made one in Javascript.
[http://feep.life/~feep/jsfarm/info.html](http://feep.life/~feep/jsfarm/info.html)

It uses a Lisp-based scene description language (with macros!) and WebRTC to
form a P2P network of compute nodes, entirely in the browser, with near-native
performance thanks to dynamic compilation to AsmJS.

It got 0 votes on Hacker News.

I'm not salty.

edit: Source on Github!
[https://github.com/FeepingCreature/jsfarm/](https://github.com/FeepingCreature/jsfarm/)

edit: I reproduced your scene, give it a bit to render.

edit: Wow, you have a lot of neat scenes!

edit: And here you go. rendered at ~2.5 million samples a second, thanks
JumboCargoCable whoever you are! (You can set your nick in the settings menu
accessible via the gear icon in the top left.)
[https://i.imgur.com/UvdBhq1.jpg](https://i.imgur.com/UvdBhq1.jpg) and scene
[http://bit.ly/2yYciCS](http://bit.ly/2yYciCS) though I think I made it too
bright.

edit: Some people appear to have buggy systems that always return black
pixels. :-(

edit: Could whoever is SilkyDoorGame please post their cpu, os and browser?

~~~
dang
If you email hn@ycombinator.com we'll send you a repost invite. I don't want
to do it now because once a particular theme (in this case ray tracers) has
made the front page it's usually not a good idea to post another one too soon.

~~~
FeepingCreature
Awesome, will do! Thanks a bunch.

------
rossant
As the author of the original version, I'm very glad to see such an improvement!

My IPython Cookbook contains increasingly optimized versions written in Cython, as an illustration of how to use that library to accelerate Python code. The fastest Cython version is 300x faster than pure Python; a lot of Python/NumPy overhead is bypassed by reimplementing the logic in, basically, C. The OpenMP multicore GIL-releasing version is roughly 4x faster than the fastest Cython version on a quad-core computer. ([https://github.com/ipython-books/cookbook-code/tree/master/notebooks/chapter05_hpc](https://github.com/ipython-books/cookbook-code/tree/master/notebooks/chapter05_hpc))

There is also a GPU reimplementation (in OpenGL/GLSL) in the VisPy examples
([https://github.com/vispy/vispy/blob/master/examples/demo/glo...](https://github.com/vispy/vispy/blob/master/examples/demo/gloo/raytracing.py)),
it is animated and runs in real time.

------
Mauricio_
For anyone without any idea how to do this, Ray Tracing in One Weekend is a good introduction. It teaches you how to render the image on the cover in a pretty short time.
[https://www.amazon.com/gp/product/B01B5AODD8](https://www.amazon.com/gp/product/B01B5AODD8)

~~~
nikofeyn
yea, that is definitely a cool book. a few months back i started to go through
it doing the implementation in racket but got distracted with other things.

[https://github.com/nikofeyn/ray-tracing-with-racket](https://github.com/nikofeyn/ray-tracing-with-racket)

the code should run directly in DrRacket without modification, but i obviously
didn't finish the book, so i never got to the cover picture.

the c++ code in the book is pretty straightforward (he leans on practicality),
so it is kind of fun to directly port to a language but then slowly change it
to be idiomatic in the target language. i was learning a lot about racket (my
first project in it) in the short time i was going through the book. i need to
get back to it...

------
berkut
While cool, it should be pointed out that this way of organizing the
computation doesn't really scale with scene complexity (more objects, or
complex triangle meshes requiring acceleration structures) or image size, as
the number of masks required to determine visibility becomes prohibitive.

One of the great things about ray tracing (at least the basics, before you get
to more complicated light transport) is how simple the normal recursive
algorithm for rendering a scene is. The method in the article complicates that
greatly with the mask passes, and I guess it could be termed a wavefront
renderer.
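
To make the mask idea concrete, here is a rough NumPy sketch of how a mask-based (wavefront-style) renderer selects which rays each object shades. The scene and variable names are mine, not the article's:

```python
import numpy as np

# Hypothetical hit distances of two spheres for a batch of 4 rays
# (np.inf means the ray missed that object).
t_sphere_a = np.array([2.0, np.inf, 5.0, 3.0])
t_sphere_b = np.array([np.inf, 4.0, 1.0, np.inf])

t_nearest = np.minimum(t_sphere_a, t_sphere_b)
visible = t_nearest < np.inf            # rays that hit anything at all

# One boolean mask per object: rays for which THIS object is the nearest hit.
mask_a = visible & (t_sphere_a == t_nearest)
mask_b = visible & (t_sphere_b == t_nearest)

# Each object shades only its masked subset of rays. With many objects,
# every object is still intersected against every ray, which is the
# scaling problem described above.
print(mask_a)  # [ True False False  True]
print(mask_b)  # [False  True  True False]
```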

------
Marazan
I wrote a pure Python (no NumPy) ray tracer as a learning exercise. Spoiler:
it was slowwwwwwwwwww.

I converted it to NumPy and it was merely slow. I then moved to array
broadcasting (which required surprisingly few code changes, thanks to NumPy
being pretty awesome) and it became fast.

~~~
tgb
I'm familiar with the concept of broadcasting in Numpy, but I don't understand
what it means in this context. Can someone explain?

~~~
Marazan
__s basically covers it, but my initial switch from pure Python to NumPy just
involved changing my vector 'class' (just a tuple in reality) into NumPy
arrays. Whilst the basic vector multiplications and additions became way
faster, the overhead of creating hundreds of tiny NumPy arrays was a killer.

So, just like in the original article, instead of creating a single 1x3 array
I created an Mx3 array, where M represented as many rays as I could fit into
memory at once (I have quite a weedy machine).

Due to how NumPy broadcasting works, exactly the same code for, say,
subtracting the origin from a ray vector works to subtract that single origin
vector from a multidimensional array of ray vectors.
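
A minimal sketch of that switch (the shapes and values here are illustrative, not from the actual renderer):

```python
import numpy as np

origin = np.array([0.0, 0.0, -1.0])    # a single 3-vector, shape (3,)

# Instead of one ray at a time, hold M ray endpoints in an (M, 3) array.
points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0]])   # shape (3, 3): M = 3 rays

# Broadcasting: the (3,) origin is subtracted from every row of the
# (M, 3) array -- the exact same expression you'd write for a single ray.
directions = points - origin

# A batched dot product along the last axis: again one expression covers
# all M rays at once instead of a Python-level loop.
lengths_sq = np.sum(directions * directions, axis=1)

print(directions)
print(lengths_sq)  # [ 1. 21. 90.]
```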

~~~
tgb
I see now, thanks.

------
m00s3
Is anyone besides me disturbed that one of the code samples had a function
that took 3 parameters, 2 of which were 'O' and 'D'? I had to look at it a few
times before I realized those were different variables.

~~~
willvarfar
In 3D programs it's normal for O to be the origin and D the direction. It's a
convention you'll see in most codebases, and it's completely undisturbing.
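
For context, the convention comes from the usual ray parameterization P(t) = O + t·D. A tiny sketch (names match the convention; the values are made up):

```python
import numpy as np

def point_on_ray(O, D, t):
    """Point at parameter t along a ray with origin O and direction D."""
    return O + t * D

O = np.array([0.0, 0.0, 0.0])   # O: ray origin
D = np.array([0.0, 0.0, 1.0])   # D: (unit) ray direction

print(point_on_ray(O, D, 2.5))  # [0.  0.  2.5]
```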

~~~
ci5er
Really? Since when?

It's been a long-o time since I did any 3D physics or rendering code (in C),
maybe even since before the WWW, but I don't remember this convention... (I
mean, sure, it makes sense, but I don't recall the two 3-space triples
necessarily being called that, even in things like GL.)

~~~
berkut
OpenGL doesn't do ray tracing, where you have a ray origin and direction,
though, so you wouldn't have seen it there.

In my experience, using the full terms, or shortening them to Orig and Dir, is
more conventional.

It gets even more fun when you get to evaluating BSDFs for materials and you
have variables like wi, wo, and different people use them in different ways :)

------
Twirrim
Out of curiosity, I did a little digging around. I really need to take a step
back and fully understand what's going on in the code, but a quick trot
through cProfile showed a lot of time spent in the dot method of vec3,
primarily via abs(self) in the norm method.

There's a useful Python library called numexpr
([http://numexpr.readthedocs.io/](http://numexpr.readthedocs.io/)) which can
speed up NumPy operations by leveraging multiple cores (and Intel's VML
library, if you have it installed). It's one I've been aware of but never got
around to trying out.

At a quick stab, it seems it can't read properties of classes. Either way,
modifying dot a bit:

    
    
        # numexpr evaluates the string expression with its own compiled
        # virtual machine; it can't see instance attributes, so pull them
        # into locals first. Assumes: import numexpr as ne
        def dot(self, other):
            self_x = self.x
            other_x = other.x
            self_y = self.y
            other_y = other.y
            self_z = self.z
            other_z = other.z
            return ne.evaluate("(self_x * other_x) + (self_y * other_y) + (self_z * other_z)")
    
    

At 400x400 this slows things down a little. Once you get above about 800x800
it starts to draw level. By the time you get to 2000x2000 it's shaving some
10% off the execution time.

edit: here's a quick stab at using numexpr in just the most obvious places,
without attempting any major refactoring. Note this bumps the resolution up to
2000x2000:

[https://gist.github.com/twirrim/64f523fd5e8be86eb392b90e9222...](https://gist.github.com/twirrim/64f523fd5e8be86eb392b90e92223157)

Compared to rt3 (at the same resolution), I knock over 10% off on this 2015
retina Mac:

    
    
      $ python rt3.py && python rt4.py
      Took 5.78933000565
      Took 4.9023668766

------
webkike
In college I built a raytracer in Rust, and I have to say it was one of the
most valuable learning experiences I have ever had.

~~~
tyingq
>In college I built a raytracer in Rust

I don't usually feel old around here. Every once in a while, however...

~~~
webkike
Yeah, I say "in college", but that was only a few months ago. College isn't
that long ago; I might as well have said "in high school".

~~~
khedoros1
"In high school" was over 14 years ago, for me ;-) The start of "in college"
was about 14 years and one month ago.

And Rust is "only" 7 years old. Time slips by.

------
ricardobeat
Are the reflections in examples like this physically correct? My brain kind of
expects the floor to be strongly curved when reflected in the sphere. Maybe
it's just the unfamiliar, unrealistic environment?

~~~
dahart
Yes, for perfect mirror surfaces. The scene isn't very physically realistic,
but the reflections are doing the right thing given the scene. The floor is
strongly curved in the reflections. The reflection of the horizon line isn't,
only because the camera is near the floor and looking almost level. If the
camera were up higher looking down, the horizon reflection would be more
strongly curved.
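
For reference, the "right thing" for a perfect mirror is the standard reflection formula r = d − 2(d·n)n. A quick sketch (not code from the article):

```python
import numpy as np

def reflect(d, n):
    """Reflect incoming direction d about a unit surface normal n."""
    return d - 2.0 * np.dot(d, n) * n

# A ray heading down-and-forward bouncing off a horizontal floor (normal +y):
d = np.array([1.0, -1.0, 0.0])
n = np.array([0.0, 1.0, 0.0])
print(reflect(d, n))  # [1. 1. 0.]
```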

------
CyberDildonics
The demos that come with Embree do simple stuff like this in real time, which
would be about 450 times faster, so I wouldn't call this 'reasonably speedy'.

~~~
dahart
This one is close to real-time at 115ms. Maybe you missed paragraph 2?

~~~
gmueckl
You could pack the raytracer and the scene into a GLSL fragment shader without
any trouble whatsoever and achieve something greater than 30fps out of the
gate.

Check out [https://www.shadertoy.com/](https://www.shadertoy.com/) - all of
this crazy stuff is generated using two triangles and a fancy shader. Almost
every 3D scene there is generated using some form of ray casting or ray
tracing on the GPU.

------
melling
I’ve got a list of random resources here:

[https://github.com/melling/ComputerGraphics/blob/master/ray_...](https://github.com/melling/ComputerGraphics/blob/master/ray_tracing.org)

------
gravypod
I'd be interested in how this compares to one optimized by Numba.

~~~
mathgenius
Yes, or perhaps go directly to llvmlite. Good wholesome fun.

------
make3
Now do a TensorFlow version!

