
Did you try to do this on the GPU, by any chance? Especially using CUDA libraries. I'm wondering what the state of the art in this field is.



Our current design isn't well suited to adaptation to a GPU, because it branches a lot and the memory accesses aren't evenly strided. So we couldn't just plug our current code into a Java-to-CUDA compiler; we'd need to change the design.

So no, we haven't yet tested using GPUs.
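
To illustrate the kind of pattern that hurts on a GPU (a toy sketch in Java, not our actual code): the data-dependent branches would diverge within a warp, and the indirect index array means neighbouring threads wouldn't touch neighbouring memory, so the accesses can't be coalesced.

    // Hypothetical sketch, not the real code: a loop that maps poorly to a GPU.
    public class IrregularLoop {
        static double score(double[] values, int[] index, double threshold) {
            double total = 0.0;
            for (int i = 0; i < index.length; i++) {
                double v = values[index[i]];   // irregular, indirect access
                if (v > threshold) {           // data-dependent branch
                    total += Math.log(v);
                } else if (v > 0) {
                    total += v * v;            // different path: more divergence
                }
                // else: skip entirely
            }
            return total;
        }

        public static void main(String[] args) {
            double[] values = {0.5, 3.0, -1.0, 7.2};
            int[] index = {3, 0, 2, 1};        // arbitrary, non-contiguous order
            System.out.println(score(values, index, 1.0));
        }
    }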


Thank you for the answer. I was not aware that these kinds of considerations affect the CPU vs. GPU debate. I was under the impression that most CPU-heavy computations can be ported to the GPU.


I take it that it's a 'kind' of decision tree or random forest with gradient boosting, so it usually 'can' run faster on an optimized CPU implementation than on a GPU (if I'm not mistaken). That's at least what I gather from BigML's offering.

edit: Can you provide insight?


It is not a decision tree or any machine learning algorithm.

It is a local search algorithm, probably using simulated annealing or tabu search as a metaheuristic.

Also, they probably segment the deliveries by common starting points, which reduces the problem size significantly - maybe to around a thousand orders per starting point.

The research literature shows that local search can handle thousands of deliveries very effectively.
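
For the curious, here's a bare-bones sketch of simulated annealing over a single delivery tour - toy random data and a simple swap move, not whatever they actually run:

    import java.util.Arrays;
    import java.util.Random;

    // Toy sketch of simulated annealing on a delivery tour: repeatedly propose
    // a small change (swap two stops), accept improvements, and occasionally
    // accept a worse tour to escape local optima.
    public class TourAnnealing {
        public static void main(String[] args) {
            Random rng = new Random(42);
            int n = 50;                               // made-up problem size
            double[][] xy = new double[n][2];
            double[][] dist = new double[n][n];
            for (int i = 0; i < n; i++) {
                xy[i][0] = rng.nextDouble();
                xy[i][1] = rng.nextDouble();
            }
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    dist[i][j] = Math.hypot(xy[i][0] - xy[j][0], xy[i][1] - xy[j][1]);

            int[] tour = new int[n];
            for (int i = 0; i < n; i++) tour[i] = i;

            double temp = 1.0;
            double current = tourLength(tour, dist);
            for (int step = 0; step < 200_000; step++, temp *= 0.99995) {
                int a = rng.nextInt(n), b = rng.nextInt(n);
                swap(tour, a, b);
                double len = tourLength(tour, dist);
                // Metropolis acceptance: always take improvements, sometimes take
                // worse moves, with probability shrinking as the temperature cools.
                if (len < current || rng.nextDouble() < Math.exp((current - len) / temp)) {
                    current = len;
                } else {
                    swap(tour, a, b);                 // undo the move
                }
            }
            System.out.println("tour length: " + current + " order: " + Arrays.toString(tour));
        }

        static double tourLength(int[] tour, double[][] dist) {
            double total = 0.0;
            for (int i = 0; i < tour.length; i++)
                total += dist[tour[i]][tour[(i + 1) % tour.length]];
            return total;
        }

        static void swap(int[] t, int i, int j) { int tmp = t[i]; t[i] = t[j]; t[j] = tmp; }
    }

Tabu search would replace the temperature-based acceptance with a short memory of recently visited moves that are temporarily forbidden.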


Wow, I'm intrigued - I hadn't even heard of tabu search before you mentioned it, thank you very much! You surely are my hero today, loserboss =)



