
Both of those problems are well worn and can scale to as many cores as we can put in a single computer.

Whether it is a Navier-Stokes fluid simulation on a grid (an image-like array of cells), arbitrary points in space that interact with their nearest neighbors, or a combination of both (rasterizing the particles into a grid and using the grid to move them), there are many straightforward ways to use lots of CPUs.
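
On the grid side, the parallelism is plain data parallelism: double-buffer the field so each step reads one array and writes another, and every row becomes independent. A minimal sketch in Java (the class name, the Jacobi-style diffusion step, and all the constants are illustrative, not anything from the comment above):

    import java.util.stream.IntStream;

    class GridStep {
        // One Jacobi-style diffusion step on an n x n grid. It reads only
        // from src and writes only to dst, so every row is independent and
        // the outer loop can safely run on all cores at once.
        static void diffuse(double[][] src, double[][] dst, double k) {
            int n = src.length;
            IntStream.range(1, n - 1).parallel().forEach(i -> {
                for (int j = 1; j < n - 1; j++) {
                    double sum = src[i - 1][j] + src[i + 1][j]
                               + src[i][j - 1] + src[i][j + 1];
                    dst[i][j] = src[i][j] + k * (sum - 4 * src[i][j]);
                }
            });
        }

        public static void main(String[] args) {
            int n = 512;
            double[][] a = new double[n][n], b = new double[n][n];
            a[n / 2][n / 2] = 1.0;              // a single "drop" of density
            for (int step = 0; step < 100; step++) {
                diffuse(a, b, 0.1);
                double[][] t = a; a = b; b = t; // swap read/write buffers
            }
            System.out.println(a[n / 2][n / 2]);
        }
    }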

Fork-join parallelism is a start. Sorting particles into a kd-tree is done by recursively partitioning, and the partitions can be distributed among cores. Once built, the tree can be read (but not written) by as many cores as you want, so every particle's neighbors can be searched and found by all cores at once; a sketch of both halves follows.
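
Here is one way that could look in Java, using the JDK's ForkJoinPool. Everything in it is illustrative rather than anyone's actual code: the tree is stored implicitly in the point array (each node is the median of its subrange), and a full sort stands in for a proper median partition to keep the sketch short. The build forks the two halves of each partition onto separate cores; the finished tree is immutable, so the nearest-neighbor queries at the end run as a parallel stream with no locking:

    import java.util.Arrays;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveAction;
    import java.util.stream.IntStream;

    class KdTree {
        final double[][] pts; // pts[i] = {x, y}; reordered in place by the build

        KdTree(double[][] pts) {
            this.pts = pts;
            ForkJoinPool.commonPool().invoke(new Build(0, pts.length, 0));
        }

        // Fork-join build: recursively partition around the median on
        // alternating axes. The two halves touch disjoint subranges, so
        // they run as independent subtasks on separate cores.
        class Build extends RecursiveAction {
            final int lo, hi, axis;
            Build(int lo, int hi, int axis) { this.lo = lo; this.hi = hi; this.axis = axis; }
            protected void compute() {
                if (hi - lo <= 1) return;
                Arrays.sort(pts, lo, hi, (a, b) -> Double.compare(a[axis], b[axis]));
                int mid = (lo + hi) / 2;
                invokeAll(new Build(lo, mid, 1 - axis),      // left half, forked
                          new Build(mid + 1, hi, 1 - axis)); // right half, forked
            }
        }

        // Read-only nearest-neighbor search: safe to call from many threads.
        int nearest(double x, double y) { return nearest(x, y, 0, pts.length, 0, -1); }

        private int nearest(double x, double y, int lo, int hi, int axis, int best) {
            if (hi <= lo) return best;
            int mid = (lo + hi) / 2;
            if (best < 0 || dist2(x, y, mid) < dist2(x, y, best)) best = mid;
            double d = (axis == 0 ? x : y) - pts[mid][axis];
            // Descend the side of the splitting plane the query falls on...
            int nearLo = d < 0 ? lo : mid + 1, nearHi = d < 0 ? mid : hi;
            int farLo  = d < 0 ? mid + 1 : lo, farHi  = d < 0 ? hi : mid;
            best = nearest(x, y, nearLo, nearHi, 1 - axis, best);
            // ...and cross the plane only if it is closer than the best hit.
            if (d * d < dist2(x, y, best))
                best = nearest(x, y, farLo, farHi, 1 - axis, best);
            return best;
        }

        private double dist2(double x, double y, int i) {
            double dx = x - pts[i][0], dy = y - pts[i][1];
            return dx * dx + dy * dy;
        }

        public static void main(String[] args) {
            double[][] pts = new double[100_000][2];
            for (double[] p : pts) { p[0] = Math.random(); p[1] = Math.random(); }
            KdTree tree = new KdTree(pts);
            // All cores query the read-only tree at once; each point should
            // find itself as its own nearest neighbor.
            long found = IntStream.range(0, pts.length).parallel()
                .filter(i -> tree.nearest(pts[i][0], pts[i][1]) == i)
                .count();
            System.out.println(found + " / " + pts.length + " points found themselves");
        }
    }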



