And now for something completely different: running Lisp on GPUs [pdf] (lboro.ac.uk)
69 points by espeed 19 days ago | 8 comments



Looks like a graduate thesis project, where they ran out of time before they could really explore the capabilities of the system. Slightly disappointing that they seem to have implemented threading but not vectorisation, which would be better suited to GPU hardware; does anyone have resources on auto-vectorising Lisp?


"CuLi and test applications will be published on the webserver of the Johannes Gutenberg University Mainz under https://version.zdv.uni-mainz.de"

but the link is dead. Anyone know where this can be found? It's a really interesting project that has practical implications for a little hobby project I'm working on.


Are you sure about the practical implications? Quoting the abstract: "At the moment, Lisp programs running on CPUs outperform Lisp programs on GPUs"


You're right, it seems - as written, the parsing overhead from requiring the entire system to run on the GPU makes it impractical. A hybrid system could outperform either CPU or GPU alone.

What I really want is a Scheme->GLSL shader compiler - a much narrower and more tractable problem.
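
To give a sense of how small the core of such a translator could be, here's a rough sketch (my own, not from the paper; the names sexp->glsl and join are made up for illustration) that turns s-expressions into GLSL expression strings. A real compiler would also need types, declarations, and control flow:

    ;; Hypothetical sketch: translate an s-expression into a GLSL expression string.
    (define (join strs sep)
      (cond ((null? strs) "")
            ((null? (cdr strs)) (car strs))
            (else (string-append (car strs) sep (join (cdr strs) sep)))))

    (define (sexp->glsl expr)
      (cond
        ((number? expr) (number->string expr))
        ((symbol? expr) (symbol->string expr))
        ((pair? expr)
         (let ((op (car expr))
               (args (map sexp->glsl (cdr expr))))
           (case op
             ((+ - * /)   ; arithmetic becomes infix
              (string-append "(" (join args (string-append " " (symbol->string op) " ")) ")"))
             (else        ; anything else becomes a GLSL function call
              (string-append (symbol->string op) "(" (join args ", ") ")")))))
        (else (error "unsupported form" expr))))

    ;; (sexp->glsl '(mix a b (clamp t lo hi)))  => "mix(a, b, clamp(t, lo, hi))"
    ;; (sexp->glsl '(* 2 (dot n l)))            => "(2 * dot(n, l))"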


This is likely unrelated, but your comment makes me think of clasp: https://github.com/clasp-developers/clasp


Check out Hypergiant (Scheme -> GLSL) http://wiki.call-cc.org/eggref/4/hypergiant

or

CEPL (Common Lisp -> GLSL) https://github.com/cbaggers/cepl


In case you haven't seen nanopass compilers applied to similar problems, Harlan is worth a look: https://github.com/eholk/harlan


What would it give you to compile Scheme to GLSL? GLSL is already a pretty direct yet high-level representation of shaders. Why put another layer on top of it?



