
Ahh I getcha a bit. What is the use case though - which script-type workflows? (genuine curiosity, not just asking!)


Typically modeling time series data and comparing it to data collected from humans (e.g. car telemetry, motion tracking or eye tracking signals). The models are stochastic dynamical systems which have to be optimized against the data. It's more or less exactly in Julia's niche.

The script typically loads the data files and runs simulations (often many times over, as the parameters are optimized). This means the models have to be fast, but they are also recurrent in nature, so they can't be vectorized with numpy. So depending on the case it's usually numba, cython or C++, all of which can be quite painful.
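To illustrate why recurrence blocks vectorization, here is a minimal sketch of a stochastic dynamical model (a made-up AR(1)-style system, not the commenter's actual model): each state depends on the previous one, so the time loop is inherently sequential and can't be replaced by a single numpy expression over the time axis.

```python
import numpy as np

def simulate(theta, x0, n, rng):
    """Simulate a toy stochastic dynamical system with one parameter theta.

    The recurrence x[t] = f(x[t-1]) forces a Python-level loop; this is
    exactly the kind of hot loop people reach for numba/cython/C++ for.
    """
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        # next state depends on the previous state -> cannot vectorize over t
        x[t] = theta * x[t - 1] + rng.normal(0.0, 0.1)
    return x

rng = np.random.default_rng(0)
traj = simulate(0.9, 1.0, 1000, rng)
```

An optimizer would call `simulate` thousands of times with different `theta` values, which is why loop speed dominates the workload.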


So what you are saying is that you would like a process that was live - with everything loaded, and ready to respond to any code that you wanted it to run, as opposed to having to bring up a new Julia process for each step in the pipeline.

Can't you compile the pipeline into a single script and then run it in a single instance of whatever?


I want the opposite: a separate process for each step, potentially with different steps in different languages.

Or at least the semantics of that. The steps should be independently callable, and each call should produce the same output given the same parameters, i.e. the state resets for each run. I don't care so much how this is exactly implemented as long as it fulfills these requirements.
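Those semantics can be sketched with plain subprocesses: each step is its own script, invoked fresh per call, so no state can leak between runs and the same parameters always yield the same output. The step script below is hypothetical, written to a temp file just to keep the example self-contained.

```python
import json
import subprocess
import sys
import tempfile
import textwrap
from pathlib import Path

# Hypothetical pipeline step as a standalone script: it reads its parameters
# from argv, computes, and prints JSON. A fresh interpreter per call means
# state resets between runs by construction.
STEP_SOURCE = textwrap.dedent("""
    import json, sys
    params = json.loads(sys.argv[1])
    # deterministic: same params -> same output
    print(json.dumps({"result": params["x"] * 2}))
""")

with tempfile.TemporaryDirectory() as d:
    step = Path(d) / "step.py"
    step.write_text(STEP_SOURCE)
    out = subprocess.run(
        [sys.executable, str(step), json.dumps({"x": 21})],
        capture_output=True, text=True, check=True,
    ).stdout
    result = json.loads(out)["result"]
```

The same pattern works across languages, since the interface is just argv in and stdout out - which is the point of making the steps process-independent.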

The data doesn't need to stay in the process memory. It can be e.g. mmapped, and it's cached by the OS anyway. And deserializing tens or even hundreds of megabytes usually takes less time than Julia takes to import Plots.
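A minimal sketch of the mmap point, using `np.memmap` (file name, dtype, and size are made up for the example): a fresh process maps the file instead of reading it, and the OS page cache serves the pages across repeated runs.

```python
import os
import tempfile

import numpy as np

# Stand-in data file (~8 MB of float64), playing the role of a telemetry dump.
path = os.path.join(tempfile.gettempdir(), "telemetry_demo.f64")
np.arange(1_000_000, dtype=np.float64).tofile(path)

# memory-map instead of loading: no full read into process memory,
# and a second process mapping the same file hits the OS page cache.
data = np.memmap(path, dtype=np.float64, mode="r")
mean_head = data[:1000].mean()  # only the touched pages are actually read
```

So a per-step process restart only pays for mapping and the pages it touches, not for re-reading the whole dataset.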

This is the thing that isn't feasible with Julia due to the TTFP (time-to-first-plot) problem: each step incurs the startup and compilation latency.



