
And now for something completely different: running Lisp on GPUs [pdf] - espeed
https://dspace.lboro.ac.uk/dspace-jspui/bitstream/2134/35128/1/main.pdf
======
pjc50
Looks like a graduate thesis project, where they ran out of time before they
could really explore the capabilities of the system. Slightly disappointing
that they seem to have implemented threading but not vectorisation, which
would be better suited to the hardware; does anyone have resources on
auto-vectorising Lisp?
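
For concreteness, the kind of code auto-vectorisation would target is a dense
numeric kernel over typed arrays. A minimal sketch in Common Lisp (the saxpy
name and SBCL-style declarations are my own illustration; whether any Lisp
compiler actually turns this into SIMD is exactly my question):

```lisp
;; A SAXPY-style kernel: y <- a*x + y over simple-arrays of single-floats.
;; With these declarations SBCL compiles a tight scalar loop; an
;; auto-vectorising compiler would map the body onto SIMD/GPU lanes.
(defun saxpy (a x y)
  (declare (type single-float a)
           (type (simple-array single-float (*)) x y)
           (optimize (speed 3) (safety 0)))
  (dotimes (i (length x) y)
    (setf (aref y i) (+ (* a (aref x i)) (aref y i)))))
```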

------
dTal
"CuLi and test applications will be published on the webserver of the Johannes
Gutenberg University Mainz under [https://version.zdv.uni-
mainz.de"](https://version.zdv.uni-mainz.de")

but the link is dead. Anyone know where this can be found? It's a really
interesting project that has practical implications for a little hobby project
I'm working on.

~~~
actondev
are you sure about the practical implications? Quoting the abstract: "At the
moment, Lisp programs running on CPUs outperform Lisp programs on GPUs"

~~~
dTal
You're right, it seems - as written, the parsing overhead from the requirement
to make the _entire_ system run on the GPU makes it impractical. A hybrid
system might outperform either a CPU or a GPU alone.

What _I_ really want is a Scheme->GLSL shader compiler - a much narrower and
more tractable problem.
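
The core of such a thing can be tiny. Here's a toy sketch of the
expression-level translation in Common Lisp (the sexp->glsl name and all of
its details are hypothetical; a real compiler would also need types,
statements, and the full GLSL builtin vocabulary):

```lisp
(defun sexp->glsl (form)
  ;; Toy s-expression -> GLSL expression printer. Arithmetic becomes
  ;; infix; every other operator is printed as a GLSL function call.
  (if (atom form)
      (if (symbolp form)
          (string-downcase (symbol-name form))
          (princ-to-string form))
      (let ((op (first form))
            (args (mapcar #'sexp->glsl (rest form))))
        (case op
          ((+ - * /)
           ;; Interleave the operator: (+ a b c) -> "(a + b + c)"
           (format nil "(~{~a~^ ~})"
                   (rest (mapcan (lambda (a) (list op a)) args))))
          (t (format nil "~(~a~)(~{~a~^, ~})" op args))))))

;; (sexp->glsl '(mix (vec3 1.0 0.0 0.0) color (* 0.5 (+ 1.0 (sin time)))))
;; => "mix(vec3(1.0, 0.0, 0.0), color, (0.5 * (1.0 + sin(time))))"
```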

~~~
jakeogh
This is likely unrelated, but your comment makes me think of clasp:
[https://github.com/clasp-developers/clasp](https://github.com/clasp-developers/clasp)

