
I agree that the existing libraries are great, and if anything that success is what we want to encourage even more of. We want to build a community around algorithm development: a central place for developers to share their work and collaborate with other developers to build a huge library of algorithms, so that everyone can benefit.

The first language we implemented was Java/Scala, largely because excellent libraries like Mahout and Weka are easily available in Maven Central. But even though the code is there in Maven, it is still not trivial to deploy those algorithms as a service. We are trying to make that easier, while at the same time supporting the developers of those libraries in the process.

Being able to run the code locally is in the plans. For the most part, the algorithms in our system are already simply Maven packages, Ruby gems, npm packages, etc., so distributing them in binary form is definitely possible. We're still figuring things out right now, but our focus is always on doing right by the algorithm development community.




This doesn't make sense to me. You display Dijkstra's algorithm as an example on your homepage. Imagine I want to use Dijkstra in a performance-critical app that calls the function 1,000,000 times per minute; making this an API rather than a library breaks the whole point of centralizing the algorithms.
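To make the performance point concrete, here is a textbook binary-heap Dijkstra in Python (the graph and function below are illustrative, not from the service in question). An in-process call like this finishes in microseconds; wrapping each call in an HTTP round-trip adds milliseconds of network latency per invocation, which dominates at a million calls per minute.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest path distances over an adjacency list:
    graph[u] is a list of (v, weight) pairs with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 5)],
    "c": [("d", 1)],
}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 4}
```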


Plus, the choice of Dijkstra's algorithm implementation depends on the graph size, graph type, and even the CPU. See http://www.cs.sunysb.edu/~rezaul/papers/TR-07-54.pdf , which compares 10 different implementation variations across several benchmarks.

The 9th DIMACS Implementation Challenge from 8 years ago (see http://www.dis.uniroma1.it/challenge9/ ) served as a way to gather the best algorithms for the shortest path problem. I don't see how this project can be significantly better than, or even close to, a traditional effort like that.


How many of those shortest paths algorithms can you run right now?

It doesn't have to be better at producing good algorithms. It just has to be better at making those algorithm implementations easy to use.

But I still don't think the business model is viable.


How many do I need to run?

The Boost Graph Library includes three different shortest-path implementations, and I already have it downloaded. That library likely fits my needs.

In the DIMACS challenge papers at http://www.dis.uniroma1.it/challenge9/papers.shtml you'll see "Single-Source Shortest Paths with the Parallel Boost Graph Library".

Indeed, the Parallel Boost Graph Library is easily available, and contains two parallel shortest-path implementations.

The flip side is: what if it doesn't fit my needs? Well, then the question becomes "what do I need?"

The other DIMACS challenges include 1) pre-computation, which is useful for road-based graphs, 2) supercomputer parallelism to search billions of nodes, 3) out-of-core graph searches, for when you don't have enough RAM, and 4) k-shortest path search.
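As a taste of item 4, here is a simple k-shortest-walks variant of Dijkstra (a folklore generalization, not from DIMACS or Boost): instead of settling each node once, each node may be settled up to k times, yielding the lengths of the k shortest walks (cycles allowed) from the source to every reachable node. Finding k shortest *simple* paths needs heavier machinery such as Yen's algorithm; this sketch only illustrates why the variants are genuinely different algorithms.

```python
import heapq

def k_shortest_lengths(graph, source, k):
    """Lengths of the k shortest walks (repeats allowed) from source to
    each reachable node. graph[u] -> list of (v, weight), weights >= 0."""
    lengths = {}  # node -> sorted list of settled walk lengths, at most k each
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        settled = lengths.setdefault(u, [])
        if len(settled) >= k:
            continue  # this node already has its k best lengths
        settled.append(d)
        for v, w in graph.get(u, []):
            if len(lengths.get(v, [])) < k:
                heapq.heappush(heap, (d + w, v))
    return lengths

# A graph with a 2-cycle: the second-best walk a->c loops once through a-b-a.
graph = {"a": [("b", 1)], "b": [("a", 1), ("c", 1)]}
print(k_shortest_lengths(graph, "a", 2))
# {'a': [0, 2], 'b': [1, 3], 'c': [2, 4]}
```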

It's very unlikely that this proposed algorithm service provider will implement these alternate algorithms.


cool, good luck!



