Taking ML to production with Rust (lpalmieri.com)
195 points by LukeMathWalker 5 days ago | 99 comments





I was a little surprised to see that instead of directly benchmarking prediction performance on a local machine, the author chose to go through that complicated web-server route. I saw the author saying this is nothing more than a remote RPC; still, a benchmark should try to remove unrelated factors as much as possible. I would be interested to see how the results compare when this web-service layer is removed.

Serving model predictions via an API is a pretty common use case for putting data science models in production. If Python's web service layer is slower than Rust's, that's a real concern even if it's not directly related to model prediction.
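
As a concrete illustration of that pattern - a minimal sketch of my own, assuming a pickled scikit-learn-style model; Flask, the file name and the route are illustrative choices, not anything from the article:

    import pickle

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Hypothetical pre-trained model; any object exposing .predict() works.
    with open("model.pkl", "rb") as f:
        model = pickle.load(f)

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expects a JSON body like {"features": [1.0, 2.0, 3.0]}.
        features = request.get_json()["features"]
        return jsonify(prediction=model.predict([features]).tolist())

    if __name__ == "__main__":
        app.run(port=8000)

Every request then pays the framework's routing and serialization overhead on top of model.predict - exactly the layer being benchmarked.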

Although I am a Python guy, if I had to bet on an alternative language here - it's going to be Swift.

The amount of evangelism money that Google is spending on Tensorflow is staggering. It's not just t-shirts, but an entire ecosystem - TensorBoard.dev, Tensorflow Privacy, TF Enterprise, Explainable AI Beta, free TPUs on Google Colab, etc.

And all of them will soon be backed using Tensorflow Swift (https://www.tensorflow.org/swift). This is MNIST for Swift Tensorflow - https://github.com/tensorflow/swift-models/blob/master/Datas... . Mind you, this is runnable on Google Colab using TPU hardware acceleration, in Swift.

Swift vs Rust is a very political thing...but previous discussions on HN have been tilted in favor of Swift [1]

And we haven't even touched on the elephant in the room - Swift may soon hook into OSX Metal (which is arguably the only way to do GPU AI on MacBooks now that the Nvidia-Apple CUDA divorce is official [2]). You can already call Metal from Swift... but not write shaders directly, AFAIK.

[1] https://cloud.google.com/explainable-ai/

[2] https://gizmodo.com/apple-and-nvidia-are-over-1840015246


>And all of them will soon be backed using Tensorflow Swift

Lol at this. It's ~5 years before S4TF has feature parity with TF, let alone pytorch. I watch their design meetings almost every Friday (they're public), and not only is the core autodiff runtime nowhere near 1.0 (just a couple of weeks ago they were talking about replacing their bespoke system with XLA), there's almost no differentiable function type library (nn.functional). Not to mention the ecosystem (there are probably, at most, 20 people, including the S4TF team, consistently using S4TF). This is not to say that they're not doing great work, but that it's far off.

Edit: I forgot to also say they're going through the Swift evolution process (since they're building the autodiff into the Swift compiler) and it's completely unknown whether it'll be accepted. I have no idea what will happen to the project if it isn't, but if they end up having to maintain their own Swift runtime then S4TF is dead in the water, because no one will be comfortable deploying two runtimes, and even then no one will trust Google to maintain it.


As one of those 20 people, I feel like I'm in good company!

Five years is pessimistic. The core library is still in a state of flux, but the team is getting the fundamentals hammered out. Once the foundation is in place, adding higher-level APIs should be relatively straightforward.

Re: XLA/MLIR, S4TF is actively trying to coordinate with other groups within Google. Whatever short-term losses there are in time/energy will be paid off down the road when the project can piggyback off other people's progress.

Finally, while the swift evolution process isn't going as quickly as would be desired, the changes needed to merge the tensorflow branch have been identified and are being worked on. It's just a matter of time till the two projects are in sync.


Thank you for your work. Although I still wish you guys had done Kotlin instead of Swift.

I haven't seen people love a language the way they love Kotlin, and without any evangelism. And you already had the world's biggest ecosystem (because of Android).

And Kotlin Native is pretty good. I daresay you would have had better control of the compiler there.


How can I audit the Friday design meeting?


>And we haven't even touched on the elephant in the room - Swift may soon hook into OSX Metal

Not sure whether I understood your statement correctly, but you can already use Swift with the Metal API. Swift is not the bottleneck preventing a TensorFlow- or PyTorch-like open-source machine learning library from appearing for macOS based on Metal; it is the closed nature of the Mac ecosystem itself.

Look at the flak Google and Facebook get regarding these projects in spite of them being open source. If an equivalent Metal-only open-source machine learning library of this scale did pop up with Apple's help, it might not go down well with the ML/DL community, as sharing & collaboration are highly valued here. Apple has even started publishing open ML papers to attract talent.

Then again, Apple doesn't get as much scrutiny from the tech community as Google/Facebook do. Take a look at any HN thread on Go or Swift: the discussion on the former drifts to the company rather than the programming language, unlike in the case of the latter, in spite of both being open source.


I'm aware that you can hook into Metal using Swift already. I meant the shader language (which is a variant of C++), though there are issues and proposals to make it Swift-centric.

When it was first announced, I was very upbeat on S4TF, and the Xcode and Swift drops with TF and some examples more or less worked and were interesting.

Since the first early release, which largely worked for me, subsequent releases have generally not worked for me. I even had much more trouble with S4TF than, for example, the Haskell TF bindings, which always take a bit of setup since I don't use the provided Docker setup.

A little off topic, but the Julia Flux ML library is worth looking at if you want a "turtles all the way down" setup.


I don’t know why you are painting this as Swift vs Rust when they are apples to oranges. Why so defensive? They’re drastically different languages operating at different abstraction levels with different (though overlapping) use cases, with different motivations and design goals.

While Swift may be the "latest and greatest" language with official support, if you look at https://www.tensorflow.org/api_docs, it's only one of several (the others being C++, Java, JS, Go and of course Python). Plus, there are community-supported bindings for C#, Haskell, Julia, Ruby, Rust and Scala. Just to make the picture complete...

Swift for TensorFlow isn't bindings - it's a ground-up rearchitecting of TensorFlow.

For Mac OS only apparently.


Having a package available for a specific OS doesn't mean it is usable, especially when it still requires conditional compilation to import either Darwin or Glibc, instead of Foundation.

And I don't see any Windows there.

Then there is the whole set of libraries that only compile on Apple platforms, no IDE, no Playgrounds and so on.


> it still requires conditional compilation to import either Darwin or Glibc, instead of Foundation

Well, swift-corelibs-foundation has made a lot of progress, and strictly speaking, to a certain point you don't actually want to hide some of the implementation details from the developer. This isn't Java - natively calling out to C is a thing that Swift can and will do.

> I don't see any Windows there.

Look harder. Swift for Windows: https://github.com/compnerd/windows-swift

> no Playgrounds

Swift Playgrounds: https://github.com/save-buffer/swift-repl

> no IDE

official VSCode work: https://github.com/apple/sourcekit-lsp/ + https://github.com/apple/sourcekit-lsp/tree/master/Editors/v...

> and so on

Do tell. What other concerns do you have? You seem deeply misinformed. While Swift may not yet have an ecosystem outside of Apple, it is factually incorrect to say that it is not usable outside of it.


>especially when it still requires conditional compilation to import either Darwin or Glibc, instead of Foundation.

You don't need to compile from source - binaries are distributed (if you actually read my link instead of assuming it was just GitHub source, you would see that).

>And I don't see any Windows there.

Yes, they're aware of this and it's being worked on.

>Then there is the whole set of libraries that only compile on Apple platforms, no IDE, no Playgrounds and so on.

you're moving goal posts in order to be able to have an axe to grind.

your comment was

>For Mac OS only apparently.

the fact that there's a runtime that runs on linux shows that that was incorrect.

>set of libraries that only compile on Apple platforms

I don't know what you're talking about here - S4TF libraries compile on all platforms, e.g. https://github.com/eaplatanios/swift-rl

>no IDE, no Playgrounds and so on.

I don't see how Xcode being developed by Apple (not the S4TF team) has anything to do with whether S4TF is multiplatform.


Not moving goalposts at all.

Swift is only usable in Apple OSes.

Making a tiny subset of its ecosystem available outside of Apple platforms doesn't make it usable for something that pretends to replace the Python ecosystem in data science.


And R has Tensorflow bindings too. https://tensorflow.rstudio.com/

Microsoft also provides Tensorflow bindings via ML.NET.

So any language that can target .NET is also able to plug into it, and I certainly would pick F# over Swift.


> Microsoft also provides Tensorflow bindings via ML.NET.

Well, Swift for Tensorflow is more than just a binding to begin with. The difference in maintenance is that Google officially supports Swift, whereas Microsoft would always be catching up to keep theirs updated.

> So any language that can target .NET is also able to plug into it, and I certainly would pick F# over Swift.

I can use Swift for both my iOS app (inference) and training - the latter is possible thanks to Python interoperability (@dynamicCallable), soon to be replaced by native C++ interoperability.

As much as you may think F# is a good choice, unless your stack/team is already using Microsoft technologies I see it as a potential sunk cost here. I certainly wouldn't use it for this case, and would rather rely on the active maintenance of the Swift/Tensorflow team at Google than use another great language for an unsupported use case.


The big difference being that I can use ML.NET on macOS, Linux and Windows, with Tensorflow Lite on the roadmap. Whereas Swift is pretty much Apple only.

Even your examples are a proof of that.

No one from the Swift/Tensorflow team at Google is working on improving Linux/Windows support.

Linux support has been mostly volunteers and some IBM contributions.

Windows is just volunteers, who have restarted the port multiple times.


I had the feeling Swift was much higher level than Rust and it also didn't bring much new to the table.

I think the OP is making the point that Python was successful because of how relatively simple and high-level it is, thus Swift has potential in that area for the same reason. Rust is hardly approachable by "the masses" - certainly not by all the ML-first engineers who are not as hard-core about programming - hence why Python became king.

Everything about TF on Swift says zombie project.

I'll be amazed if it ever gets any significant foothold in the ML world.


This is really new to me, I always thought of Swift as the iOS thing to look into when doing something there.

Very interesting that Google is pushing Swift for their Tensorflow platform. Any ideas why Swift is of interest to Google?


You can read why Tensorflow team chose Swift here[1].

[1]https://github.com/tensorflow/swift/blob/master/docs/WhySwif...


Swift playgrounds seem very much aligned with notebook style ML development. They're in many ways a much more powerful version of notebooks.

https://www.apple.com/swift/playgrounds/


They hired one of the Swift leaders for the TensorFlow team.

I wish it was Kotlin instead of Swift. But as much as I want to move away from Python, there are reasons to stick with it (ML libraries and a low entry barrier for non-programmers).

Swift for writing Metal shaders? That's the first I've heard of this, and honestly I find that doubtful but what do I know.

> The amount of evangelism money that Google is spending

Same for Rust, Swift, Go... it's like we are in another bubble like the dot-com and everything is hype driven.


Although I am not on the Rust team and do not speak for them, I have enough familiarity with Rust's funding situation to assure you that there is no budget available for developer evangelism. :P

Well... not everything - Rust is actually good. Just sit back, enjoy the effects of the investment in this stuff, and don't worry about other people being foolish with their money.

Even Go is not that bad tbh. It's basically Java/.NET done right, and that makes it quite appropriate for some uses.

No it is not.

The only thing that it has done right versus Java is having AOT compilation since the beginning (.NET always supported it via NGEN).

Not to mention that Java and .NET are platforms for programming languages, and already there Go loses big time versus what we have available in our toolbox.


.NET done right? Haha, really? I find Go to be an order of magnitude less productive than C# and more complicated to get up and running with actually building useful things. Go over-complicates things which C# and .NET make easy, while also having less readable code and less robust tooling.

I still have no idea how to get/run/make/compile/deploy Go stuff. Sure, there's some GOPATH, some packages, dep? go get?

I have no idea how, but 1-2 years ago when Kubernetes was new I tried it, and somehow got it to build (obviously I must have been using the stuff from the README) - but I haven't learned it.

Whereas with Rust? It's cargo. Done. That's it. Or rustup if you don't have anything yet. With Java? It's IntelliJ or the OpenJDK.


Yeah, vendoring in Go can be a bit frustrating. One of the great benefits of the JVM is how dependencies are isolated to the project so nicely, bundled into the jar, and everything "just works" no matter where you take that jar. That's one of the most liberating aspects of working with Java/Kotlin/Clojure, etc.


I haven't encountered it so far.

Last time I fiddled with something Go-based was with cfssl (Cloudflare's excellent PKI thingie), and that uses something else. I don't remember what; it's in the GitHub Actions YML file now. (Which alas turned out to be a big hot air pipeline :// )

I plan on learning a bit more about Go, but last time, with the 2.0 proposal and the go mod/dep thing, it seemed like there would be some streamlining, so I decided to postpone the immersion into the Go-cean.


Yeah, every time I have tried to get a Go project set up or built from source, I end up throwing in the towel 15 minutes later in frustration. Never had that kind of experience with any other language.

It's not obvious to everyone yet, but the modern language that will become the most mainstream is obviously Kotlin. It has almost all modern language features, a ton of justified syntactic sugar, and is 100% compatible with the Java ecosystem, making it the most production-ready among the "modern languages".

The only other language with such a property is, to my knowledge, TypeScript.


Good!

I think Rust will eat more and more of the stack.

It has similar performance to C/C++, but feels higher level. It's a drop-in replacement for C and is developed by a company that devs like.


If it is a "drop-in replacement", the Rust compiler should be able to compile C code - it can't.

Rust is not a drop-in replacement for C, it’s a drop-in replacement for C++.

It's a drop-in replacement for neither since it's an entirely different language. C++ went to some great pain to be a drop-in replacement for C; you can "drop in" a C++ compiler in your toolchain and still compile your C code (mostly).

This was kind of true till C99, but not so much anymore. restrict is the most obvious offender, but far from the only one [1].

[1]: https://en.m.wikipedia.org/wiki/Compatibility_of_C_and_C%2B%...


The 25x speedup is because the gRPC Python implementation is very slow [0].

[0] -- https://performance-dot-grpc-testing.appspot.com/explore?das...


Yeah, from skimming the article, it sounds like it's making gRPC microservices 25x faster, not ML.

Of course the actual ML is not going to be faster in Rust, because the main tensor operations as used from Python are already written close to the metal. The cost of these operations is 99% of what it takes to run ML. Optimizing the remaining 1% is just not going to have a noticeable effect.

I've spent a lot of time writing Python ML code, and I think 99% (of time inside fast C/CUDA code) is much higher than most programs achieve. Off hand I'd say 80% is average and 90% is good (usually coming after a performance refactor). Also, one aspect that people often overlook is that optimizing compilers can often combine or vectorize operations in ways that are an order of magnitude faster than the same operations performed consecutively in, say, pure numpy. In that scenario, even though each numpy operation is very fast for what it is, the Python program that combines a bunch of them ends up running much slower.
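
To make the fusion point concrete, here's a toy sketch of my own (not from the article): each NumPy operation below makes its own pass over memory and materializes a full temporary array, while a JIT such as Numba can fuse the whole expression into a single loop.

    import numpy as np
    from numba import njit

    a = np.random.rand(10_000_000)
    b = np.random.rand(10_000_000)
    c = np.random.rand(10_000_000)

    def stepwise(a, b, c):
        # Three separate passes over the data (a*b, sin(c), then the add),
        # each allocating a temporary array the size of the inputs.
        return a * b + np.sin(c)

    @njit
    def fused(a, b, c):
        # One pass, no temporaries: the compiler sees the whole expression.
        out = np.empty_like(a)
        for i in range(a.shape[0]):
            out[i] = a[i] * b[i] + np.sin(c[i])
        return out

Each individual NumPy call is fast C, but the composition is memory-bandwidth bound in a way the fused loop isn't.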

gRPC would account for very little time relative to the k-means algorithm. I think what results in the large speedup is the "embarrassing" parallelism that an async Rust server (gRPC in this case) would give. Knowing the CPU time during the tests could confirm this. I'll try to run the benchmarks on my machine this week to verify my assumption.

I've hit some degenerate cases in gRPC serialization. The golang grpc/proto library accounted for something like 40% of CPU time doing serialization of mapping types for an API I developed, under load test. Changing that to a list of key/value pairs dropped that 40% to <1%.

The gRPC Python server implementation does a lot of work using Python builtins, and doesn't use recent asyncio primitives. It uses built-in Python thread-pool execution, which means you also have the GIL to worry about: https://grpc.io/docs/tutorials/basic/python/#starting-the-se...
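
For reference, this is the shape of the server setup the linked tutorial shows - a minimal sketch, with the servicer and generated-module names as placeholders:

    import time
    from concurrent import futures

    import grpc
    # route_guide_pb2_grpc would be the module generated from your .proto.

    def serve():
        # Every RPC is dispatched onto this thread pool, so all handlers
        # contend for the GIL; there is no asyncio event loop involved.
        server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
        # route_guide_pb2_grpc.add_RouteGuideServicer_to_server(
        #     RouteGuideServicer(), server)
        server.add_insecure_port("[::]:50051")
        server.start()
        try:
            while True:
                time.sleep(86400)
        except KeyboardInterrupt:
            server.stop(0)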

Given that, I would expect its connection / request management to be pretty darn slow. gRPC has better support for Python as the client. The thread-pool execution model also prevents you from doing a number of things you might want to do with gRPC servers (long-lived request streaming is out of the picture).

There are a couple of third-party packages out there trying to make this better (e.g. https://github.com/vmagamedov/grpclib), but I hit compatibility issues in a few cases when trying to use them for some prototypes.


I think it is also worth examining the type of production load and its importance within the business.

For mission-critical production usage, I think using high-performance systems/languages is a pretty good starting point, with the assumptions that you update your model conservatively (not often) and that there are enough engineering resources to maintain and 'port' models from Python, since most data scientists are trained in Python. In that type of workload and context, I think it makes sense to use Rust for deployment.

Another type of workload is, I think, much more common: the experimental projects where data scientists are trying to discover whether something has enough ROI to become part of the production system. Those projects and deployments require a quick turnaround on the iteration cycle. I am not sure Rust or even Swift are good tools here, when typical data scientists are not well versed in them. Not to mention that, usually in this setting, they don't have a lot of engineering resources they can use. Python is still the go-to option for this type of work.

I think the article has the right intention: speed up ML in production. For the experimental work setting, I think we can have our cake and eat it too: data scientists still use Python and generate a production-ready deployment service without help from engineers.

We created an open-source Python lib/platform called BentoML (www.github.com/bentoml/bentoml). BentoML makes it easy to serve and deploy ML models in the cloud, going from ML model to production API endpoint with a few lines of code. You can try it out in this Google Colab notebook (https://colab.research.google.com/github/bentoml/BentoML/blo...).

BentoML works with multiple ML frameworks (Tensorflow/fastai/PyTorch/etc.) and can generate different distribution formats (Docker/AWS Lambda/CLI/Spark UDF) for your serving needs. We also support custom runtime backends. Feel free to ping me or ask questions in our Slack channel. We are pretty active there.
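
For a flavor of the API, here is a rough sketch along the lines of the examples in our README (simplified; see the repo or the Colab notebook for the exact, current API):

    import bentoml
    from bentoml.artifact import SklearnModelArtifact
    from bentoml.handlers import DataframeHandler

    @bentoml.env(pip_dependencies=["scikit-learn"])
    @bentoml.artifacts([SklearnModelArtifact("model")])
    class IrisClassifier(bentoml.BentoService):

        @bentoml.api(DataframeHandler)
        def predict(self, df):
            # Delegates to the packed scikit-learn model.
            return self.artifacts.model.predict(df)

    # Packing and saving, roughly:
    #   svc = IrisClassifier()
    #   svc.pack("model", trained_classifier)
    #   svc.save()
    # Then serve it with: bentoml serve IrisClassifier:latest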


Nice article!

Rust can never replace Python as a front-end because it's compiled and not all that flexible or user-friendly.

Rust can't currently replace C or C++ at the back-end because it doesn't have good enough support for GPU computing yet.


"The Rust ecosystem is indeed rich in ML crates - just take a look at what a quick search for machine learning on crates.io returns. No need to go and rewrite everything from scratch: I picture linfa as a meta-package, a collection of curated algorithm implementations from the Rust ecosystem. The first stop for your ML needs, as scikit-learn for Python."

Rust doesn't have to replace anything. It can be used to create complementary packages especially ones that use C/C++. The point of introducing Rust into your ecosystem is for safety + performance which is hard to achieve without discipline.


Right now, NVidia is designing their GPGPUs to run C++ code.

https://www.youtube.com/watch?v=86seb-iZCnI

https://www.youtube.com/watch?v=VogqOscJYvk

With Intel just releasing oneAPI (aka Data Parallel C++) last month, and Khronos pushing SYCL as an alternative to CUDA C++.

It took 30 years for C++ to reach this stage, and in some domains (like embedded) it is still fighting for relevance.

That is something to keep in mind whenever advocating Rust as a replacement in domain XYZ.


If the GPUs can run C++ code, it is likely that they'll be able to run Rust code as well. If there is an LLVM target for it, Rust usually runs on it.

Watch the second talk; it is all about making the CUDA memory model work like the ISO C++ memory model, a 10-year-long project.

Rust has yet to define a memory model.


Sorry, I tried searching for what you mean by "ISO C++ memory model" and can't find authoritative usage of that term. I find a bunch of references to fences and std::atomic, both of which Rust has supported from the start.

I also can't find any details on the CUDA project you mention; it seems counter-intuitive to me. At least with graphics data, it's mostly just large buffers of structs, defined with vertex attrib pointers or whatever. What advantages would a more complex memory model yield?


Here are some references about the C++ memory model, introduced in C++11:

https://arxiv.org/pdf/1803.04432.pdf

https://people.mpi-sws.org/~viktor/slides/2016-01-debug.pdf

A language memory model is a mathematical specification of how the language semantics map to actual CPUs, and the basis for writing lock-free algorithms.

If you had bothered to follow the YouTube videos you would have found the CUDA projects I mentioned.

Rust currently doesn't have such a mathematical model.


>If you had bothered to follow the YouTube videos you would have found the CUDA projects I mentioned.

Dude, you linked to two hours' worth of video. Give people a break.


I don't give a break to people that are too lazy and then issue statements like "I also can't find any details on the CUDA project you mention".

Just clicking on the links would show the info.


> What advantages would a more complex memory model yield?

consistency


Just because it might be possible doesn't mean that it's worthwhile at all.

> Rust doesn't have to replace anything. It can be used to create complementary packages especially ones that use C/C++.

I'm not sure that I understand correctly - are you suggesting that people use Rust as an additional layer between Python and C++?


The original article is proposing that the existing C and C++ libraries be rewritten in Rust for performance reasons (while continuing to use Python to access the functionality).

I guess you meant memory-safety reasons because Rust is attempting to be as fast as C and C++, not the other way around.

Depends on the domain.

For single-threaded applications, being as fast is the goal. There is potential for it to be faster due to stronger restrictions around pointers.

Multithreaded code is slated to be faster mainly because Rust provides safety for patterns that would be difficult to do right in C++ (see Servo's CSS handling).

That is to say, you could do the same thing in C++, but getting it right is harder to do.


Concurrency patterns that are difficult to execute in the decades-old codebase of Firefox, which also happens to be a browser, a.k.a. a nightmare of complexity.

The same environmental conditions don't apply to other software, and it's perfectly possible to write blazing-fast multithreaded software in C++. In fact, that's SOP.


I've heard this suggestion fairly frequently - that people use Rust as a wrapper over C++ bindings to add in safety to the C++ underneath.

Mostly regarding using BLAS etc. in Rust rather than C++. It seems a little pointless, as the unsafety lives in the underlying code, so the wrapper only sanitises input/output.


For that use case it is more productive to actually use static analysis tooling for C++.

"people use Rust as a wrapper over C++ bindings to add in safety to the C++ underneath."

Adding a thin wrapper over an insecure library won't magically make it secure. That's why pure Rust or pure Swift versions of various algorithms and functionalities are worth so much - guaranteed memory-usage correctness.


What’s wrong with being compiled?

In Python I can write code one line at a time, and each line is executed immediately. This means that I can write code to print the data if I want, or try different things, etc., without ever rerunning the program from scratch, which can be very slow when training on data.
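
The sort of session being described, as a tiny sketch (the CSV file is a hypothetical placeholder):

    >>> import pandas as pd
    >>> df = pd.read_csv("training_data.csv")  # the slow step runs once
    >>> df.shape
    (1000000, 42)
    >>> df["label"].value_counts()             # poke at the data freely
    >>> # ...tweak, re-plot, refit, all without restarting the process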

Here, have a go at it with C++.

https://blog.jupyter.org/interactive-workflows-for-c-with-ju...

Being compiled doesn't mean anything, it is a matter of tooling.



Pretty cool. I wonder how well it works, though.

I use the evcxr repl all the time. It has some rough edges but it's incredible to finally have a rust repl that's functional. I had been using rusti on an ancient rustc. One thing I really like is that you can add a crate with one line (`:dep rand = "0.7"`).

Rust will never work except in niche areas because it doesn't allow for easy experimentation and does not provide an iterative workflow like Python, R, or Julia.

I think the future is Julia, as it's as easy to write as Python, is interactive, and can be compiled for fast speeds!

It will just take time for Julia to have as many libraries, but at some point the tipping point will be reached, and Julia will just eat Python's lunch.

Python was dying before the latest wave of the data science and machine learning craze saved it anyway. There are pretty good reasons to believe Python (and Ruby) will slowly wither away. I am careful not to say that they will die out, because they won't, as some computing problems do not require so much speed and hence Python and the like can still pass muster. But in a post-Moore's-law world, data is getting bigger, the demand for computing is larger, but programs aren't getting faster, and there is a limit to how much you can optimise Python. Eventually, a crop of speedy and easy-to-write programming languages will rise up and eat Python's lunch. The early signs are there with tools like Julia.


> Rust will never work except in niche areas because

It warms my heart to see people don't read TFA. Like at all.

From TFA:

> But it was important to clean the air of any possible misunderstanding before getting started. I don’t believe Rust will replace Python as the language of choice for ML - it just isn’t happening, it’s not a thing, neither today or tomorrow.


What's wrong in expanding on a topic that was barely mentioned in the article?

Your parent wrote detailed reasoning that's not present in the article.

Also this type of uncharitable comment is against Hacker News guidelines: "Please don't comment on whether someone read an article".

http://news.ycombinator.com/newsguidelines.html


There can be no discussion with GGP since they obviously didn't read the article. They think the article will claim Rust supplants Python. They think there is a performance advantage in using Julia instead of Python (but the Python code is just calling C, C++, and Rust as described in the article). GGP thinks there will be a tipping point where Julia has more libraries than Python, but this is also addressed in the article.

There is no expanding on topics barely mentioned in the article. As far as GGP is concerned there was just a headline and they Kanye'd with "Julia is the best". As far as I care, mods can delete the comment and subtree.


That may be true in data science, but the tradeoffs are not necessarily the same in other fields. I do not see Python fading away in mine, for instance. If anything, it looks more and more entrenched, and even if the Python code becomes legacy code, interfaces will have to be maintained. I still use Tcl quite often in design automation. Python is and will still be used to script Blender, Kodi, Inkscape, KLayout, sigrok, and countless others, for instance.

But for your weekend project, where you need to write a bit of code and want to practice a new language / just want to code as quickly as possible? Sure, the languages will come and go.

Likewise with Rust. Maybe not for iterative design phases, but I can certainly see it being used in production, especially if robustness/speed are needed.


I like Julia, but let's be real. We are a long way away from Julia being anywhere near Python when it comes to usability. Error messages, code structure and readability and documentation are afterthoughts rather than a central focus of the community.

Because the core is so brilliant at what it does, I am sure that Julia will get there eventually, but it will be a painful road.

For crying out loud, we still don't have a story for people to save their data...


> For crying out loud, we still don't have a story for people to save their data...

Can you elaborate on what you mean by this? Does Julia not support writing files?


It does, of course, but serialization is problematic (with the combination of type inference and dynamism this is maybe to be expected).

Beyond that, there is a bewildering array of packages, and not really a clear consensus on which to use:

https://discourse.julialang.org/t/what-is-the-preferred-way-...

JLD2, which for a while seemed like an emerging consensus option, has caused massive data loss for a number of people:

https://github.com/JuliaIO/JLD2.jl/issues/55

The issue is also that Julia packages often use their own container types for convenient analysis. For example, the solution type of DiffEq has a number of important features like interpolation built in. If I want to reliably save the data I got from the numerical integration, and don't want to rely on brittle/broken serialization packages, I have to extract the underlying arrays, though. That would be fine if not for the fact that there is no documented way (that I've found) to initialize the solution objects from this raw data, so now I have lost access to the convenience functions.

Don't get me wrong, this is all growing pains. I am sure this will be sorted out eventually. But it's important to point out how the intersection of features that enable Julia to do things that are unheard of in other languages (e.g. DiffEqFlux, Zygote) also mean there are some things that still need to be figured out for this language (serialization, useful error messages, interfaces).


JLD2 works with the DiffEq solution and, as of last week, has a regression test, so it will be known if call-overloaded types have an issue in the future. Now that there is less code churn, I expect this to continue to be supported (the break came from the frantic v1.0 updates).

The data loss issues are not related to this specific issue, though. They are purely JLD2, and about the use of mmap, which is problematic in HPC contexts - so exactly when you have to save data that is expensive to produce. See the linked issue.

The only workaround is to use an undocumented alternative interface for working on the files. A pull request to at least inform people about the issue has been lying around for almost two years.

I am happy that activity has picked up again, but don't intend to recommend the use of JLD2 for expensive data for the foreseeable future...


Yes, Julia pulls off some impressive stunts - supercomputing [0], ML where the code is very close to the mathematical description [1], code reuse [2], etc.

[0] https://juliacomputing.com/blog/2019/04/12/Supercomputing-ju...
[1] https://www.youtube.com/watch?time_continue=188&v=9KBaRS2gy-...
[2] https://www.youtube.com/watch?v=kc9HwsxE1OY&t=130s


It's not like Python's community is done writing libraries and improving libraries. If there is going to be a tipping point it's going to have to start happening before the Julia package ecosystem catches up to Python's.

I am also betting on Julia.

What do you think about Swift?

It won't go anywhere; Tensorflow for Swift only exists because of Chris Lattner.

Many researchers work on Windows, Swift has zero support there, and when asked about it, they just kind of invite the Swift community to provide support.

So zero interest from either Apple or Google to make it work on Windows.

This is the latest status of Swift on Windows,

https://www.youtube.com/watch?v=Zjlxa1NIfJc

Then on Linux, Foundation support is still flaky; for basic stuff one still needs to do conditional compilation and directly import either the Darwin or Glibc modules.

Swift is just like Objective-C: yes, you can kind of use it outside Apple's ecosystem, but at the expense of productivity and with a lack of adequate tooling.

Really, I don't expect Tensorflow for Swift to ever reach mainstream adoption; maybe Google can sell it as Mac-only and thus announce the project as having been a success.


Too verbose. Does not even support Windows. Too closely tied to iOS, at least in image. MLIR sounds like a nice idea but isn't really Swift-specific. I don't understand why they decided to do Swift for Tensorflow; they should have invested in Julia for Tensorflow.


