This is a bit awkward for me as I've paused the development of HLearn and emphatically do not recommend anyone use it. The main problem is that Haskell (which I otherwise love) has poor support for numerical computing. I've tried developing an alternative standard library to improve the situation (https://github.com/mikeizbicki/subhask), but Haskell's type system isn't yet powerful enough to do what I want. I'm sure the type system will have the needed features in 5-10 years, and I'd rather wait and do it right.
If you have any questions, I'd be happy to answer them.
Most readers seem to be misinterpreting Mike as anchoring off other popular programming languages of today, when he's actually looking for language features that (a) don't yet exist, and (b) for which there's no consensus that they'll actually be good once they do. (I'm highly skeptical of dependently typed programming.)
I think there's a case to be made that numeric programming in Haskell, relative to today's state of the art rather than the year 2100, really isn't so great. But my concerns are very different from Mike's, and revolve around libraries rather than type system features.
Source: have done a bit of Haskell in my day.
I do think that matlab/python are somewhat better numerical programming languages than Haskell as-is, but only marginally. This isn't just down to the library ecosystem: I think dynamic languages really are better for this than the best library theoretically possible in Haskell2010/GHC 8.2. There are just some things that the existing type system makes a bit more awkward.
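To make the kind of friction I mean concrete, here's a tiny example (my own, not from HLearn or any particular library): mixing integral and floating-point values needs explicit conversions that a dynamic language would do silently.

```haskell
-- Dividing a Double sum by an Int length needs an explicit
-- fromIntegral, where Python would just coerce.
mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)

-- Writing `sum xs / length xs` is a type error: (/) wants both
-- operands at the same Fractional type, and length returns an Int.
```

Trivial here, but this kind of explicit plumbing shows up everywhere in numerical code.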
I think it's a big shame that traditional AI and computer-sciency languages like Haskell and Prolog have lagged so far behind the mainstream ones in terms of machine learning, and as machine learning gets more popular I'm worried this will cause them to fall even further by the wayside than they have already.
What is it that's making Haskell bad at numerical computing? I would have thought it's not much worse than e.g. Julia or Python, but even if it is, I always figured there are other benefits to programming in Haskell -- otherwise we'd all be geeking out over FORTRAN, I guess.
With Prolog the big issue is that statistical AI algorithms tend to go a lot faster with mutable, indexable data structures, and those don't have much support in Prolog. What is it that's really bothering you with Haskell? Could you give an example?
[Note: I'm a Haskell noob, but I should be able to handle code examples]
What still needs to be done? Does Idris have enough of the power that you need?
For example, I want the compiler to automatically rewrite my code to be more efficient and numerically stable (see the HerbiePlugin for the GHC compiler, which does exactly this: https://github.com/mikeizbicki/HerbiePlugin). My understanding is that the Idris compiler gets much less engineering work done on it (outside of the type system), so getting efficient running code out of it would be too difficult.
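To give a sense of what such a rewrite looks like, here's a classic floating-point stability example of the kind the Herbie tool handles (illustrative only; I haven't checked what HerbiePlugin actually emits for this expression):

```haskell
-- For large x, sqrt (x + 1) and sqrt x are nearly equal, so the naive
-- subtraction loses almost all significant digits (catastrophic
-- cancellation).
naive :: Double -> Double
naive x = sqrt (x + 1) - sqrt x

-- Algebraically identical form (multiply by the conjugate) that stays
-- accurate; a Herbie-style rewrite can find this automatically.
stable :: Double -> Double
stable x = 1 / (sqrt (x + 1) + sqrt x)
```

At x = 1e16 the naive version returns exactly 0 (the `+ 1` is lost to rounding), while the stable one returns roughly 5e-9.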
1. Graph structures are notoriously difficult to model in functional languages.
2. The software-engineering side of deep learning is not all that difficult (e.g. using Keras is quite simple).
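Regarding point 1, the usual pure-functional workaround is to represent the graph indirectly rather than as nodes pointing at each other. A minimal sketch (my own, not from any particular library):

```haskell
import qualified Data.Map.Strict as Map

-- Store the graph as an adjacency map keyed by node id, since pure
-- values can't hold mutable references to one another.
type Graph = Map.Map Int [Int]

-- Adding an edge builds an updated map rather than mutating in place.
addEdge :: Int -> Int -> Graph -> Graph
addEdge u v = Map.insertWith (++) u [v]

neighbors :: Int -> Graph -> [Int]
neighbors v = Map.findWithDefault [] v
```

Every insertion is an O(log n) path copy instead of an O(1) in-place update, which is part of why graph-heavy (and array-heavy) algorithms feel unnatural without dropping into ST or IO.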
> Grenade layers are normal haskell data types which are an instance of Layer, so it's easy to build one's own downstream code. We do however provide a decent set of layers, including convolution, deconvolution, pooling, pad, crop, logit, relu, elu, tanh, and fully connected.
It's called a README for a reason.