
Yes, Julia has a lot of benefits in some regards.

The community is great. But small. For a lot of situations, I'd be hesitant to invest in Julia, because I don't know if the community will stay that way or fade away.




Out of curiosity, how would you know that the community is large enough, or committed enough? For example, while Julia has been in development for almost 10 years, a lot of the community has only been around for 5 years. There are about 2,500 Julia packages, with the ability to call C, Fortran, R, Python, Java, etc. All the key community stats (downloads, website views, video views) show healthy growth every year.

While in absolute numbers we may be at 20% of the R or Python communities, I am always curious to understand what people mean when they say the community is too small. What would be a signal that a particular community is big enough?


For me, as long as a core group appears to be active, I'm fine with a community's survivability. Julia's data-science and plotting libraries have continued to improve in terms of documentation and feature parity; both are critical in an immature ecosystem, as they indicate an active core group of developers. Also, many libraries appear to be driven by academics creating cutting-edge or "workhorse" libraries. One good example is Steven G. Johnson's involvement in Julia [1,2]; since he created the FFTW library and NLopt, I'd put him in the category of 'prolific data science contributor'. Or take the GaussianProcesses.jl [3] library, which has a surprisingly thorough implementation, along with citable academic research, for speeding up GP fitting. Pretty cool! Plus it's pretty performant to, say, use Optim.jl to optimize the hyperparameters for a set of GPs. That enables a lot more iterations of data exploration.
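The hyperparameter optimization mentioned above (what GaussianProcesses.jl's `optimize!` does via Optim.jl) amounts to maximizing the GP's log marginal likelihood over kernel parameters. A minimal language-agnostic sketch of that idea, written here in Python/NumPy with a squared-exponential kernel and toy data (all names and data are illustrative, not from either library):

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-D regression data
rng = np.random.default_rng(0)
X = np.linspace(0.0, 5.0, 30)
y = np.sin(X) + 0.1 * rng.standard_normal(30)

def neg_log_marginal_likelihood(theta):
    # theta = (log lengthscale, log signal variance, log noise variance);
    # the log parameterization keeps everything positive.
    ell, sf2, sn2 = np.exp(theta)
    d = X[:, None] - X[None, :]
    K = sf2 * np.exp(-0.5 * (d / ell) ** 2) + sn2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # Standard GP log marginal likelihood, negated for minimization
    return (0.5 * y @ alpha
            + np.log(np.diag(L)).sum()
            + 0.5 * len(X) * np.log(2.0 * np.pi))

res = minimize(neg_log_marginal_likelihood, x0=np.zeros(3), method="L-BFGS-B")
ell, sf2, sn2 = np.exp(res.x)  # fitted lengthscale, signal var, noise var
```

Once the optimizer returns, the fitted kernel parameters are what you would then use for GP prediction; doing this fit cheaply is exactly what makes rapid data exploration feasible.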

Essentially, the base ecosystem of a language is driven by a core group of contributors, and the dedication and ability of that group matters more than most other factors. When doing scientific and/or data-science work, I personally care more about the core quality and what the platform enables. Lately I've considered learning R, as it has a lot of well-done stats packages which simply aren't available in Python and aren't ready yet in Julia. Last time I tried to calculate a confidence interval in Python for an obscure probability distribution, I ended up wanting to pull out my hair in frustration. There are libraries that kind of handle it in Python, but they are nigh impossible to modify or reuse for a generalized case, much less to get a proper covariance matrix with enough documentation to know what to do with it. I used R examples to figure out the correct maths. R's NSE seems appealing in allowing generalized reuse. I've had similar ability to reuse library features in Julia for solving problems outside that library's initial scope.
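For what it's worth, the generalized recipe being described (maximum-likelihood fit, covariance matrix from the inverse Hessian, Wald confidence intervals) can be written by hand with scipy. This is a sketch only; the Gamma distribution here is a stand-in for whatever obscure distribution was actually involved, and the data is simulated:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma, norm

# Simulated data from a Gamma(shape=2, scale=1.5) as a stand-in distribution
rng = np.random.default_rng(1)
data = rng.gamma(shape=2.0, scale=1.5, size=500)

def nll(params):
    # Negative log-likelihood; reject nonpositive parameters
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    return -gamma.logpdf(data, a=shape, scale=scale).sum()

res = minimize(nll, x0=[1.0, 1.0], method="BFGS")
# BFGS's inverse-Hessian approximation doubles as an estimate of the
# covariance matrix of the MLE (asymptotically, the inverse Fisher information)
cov = res.hess_inv
se = np.sqrt(np.diag(cov))
z = norm.ppf(0.975)
ci = [(est - z * s, est + z * s) for est, s in zip(res.x, se)]  # 95% Wald CIs
```

Because only `logpdf` is assumed of the distribution, the same code reuses across any distribution scipy (or your own code) can evaluate, which is the kind of generality the R examples made easier to figure out.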

1: https://en.wikipedia.org/wiki/Steven_G._Johnson

2: https://discourse.julialang.org/t/steven-johnson-as-a-juliac...

3: https://github.com/STOR-i/GaussianProcesses.jl




