Deep learning may need a new programming language (venturebeat.com)
117 points by bobjordan on Feb 19, 2019 | 124 comments



What about Julia[1]? Seems like a perfect fit[2].

[1] https://julialang.org/

[2] https://juliacomputing.com/domains/ml-and-ai.html


Julia is probably the next best bet, according to the Swift for Tensorflow team - https://github.com/tensorflow/swift/blob/master/docs/WhySwif...


This is exactly what I thought as well. Julia is born for this use case.


Except for the 1-indexing part.


Been wanting to try Julia out for a while, but the 1-based indexing perennially puts me off. I can't see how I can avoid spending a large chunk of time (and endless frustration) unlearning years of 0-index muscle memory (nor do I see why I should, for the sake of adapting to a single language that, at least initially, I will just be toying with).

It was probably a good marketing move on their part, a wink towards the Fortran/Matlab crowd, but it certainly damages the language's appeal for potential converts from C, C++ and Python.


It's not that bad in practice. I wrote Matlab for 12 years and Python for 15 years, and I seamlessly switch between 0 and 1 indexing.

I know the arguments for 0 vs 1 indexing, but in practice it's not really an issue since they're both just book-keeping. The only time where things might get interesting is for certain complex recursive expressions (recursive in the mathematical sense) where you want your code to match the equations, but in those cases I just create a new offset-index and work off that. (edit: just learned from another comment that Julia supports custom indices.)

Note: caveat -- my use of indices is mathematical. I don't do any kind of low-level stuff.


Then challenge yourself to write some code completely agnostic to the indexing, which will force you to learn some neat features. Doing pointer arithmetic with your bare hands (however much muscle memory they have) is very seldom necessary, computers are pretty good at that stuff.


I thought one of Julia's compelling features is the ability to drop down to the level of index arithmetic without losing performance (as Python does, once you start writing manual for loops).

If I want to stick with higher-level constructs for dealing with "collections of numbers", I can use numpy arrays in Python, the STL in C++, etc.

Do you think Julia's high-level constructs are much better than the competition? E.g., how do they compare with C++20 ranges?


Maybe I phrased that poorly: you can indeed write the loops yourself (without a speed penalty) as advertised, to do things which are hard/ugly/impossible in numpy. But very often you can do this without dropping all the way down to explicit pointer arithmetic, where you'd have to care about base-0/1 and off-by-one errors (again without penalty). This in-between zone is what I was trying to suggest is worth exploring. See for example https://julialang.org/blog/2016/02/iteration for a taste.
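To make that concrete, here is a minimal sketch (mine, not from the linked post) of what index-agnostic Julia looks like; eachindex and axes hand you whatever indices the array actually has:

  # Works for 1-based arrays, offset arrays, views, etc.; no assumption
  # that indices start at 1 (or 0).
  function mysum(A)
      s = zero(eltype(A))
      for i in eachindex(A)      # the array's own indices, whatever they are
          s += A[i]
      end
      return s
  end

  # Multidimensional version: axes(A, d) instead of assuming 1:size(A, d).
  colsums(A::AbstractMatrix) = [sum(A[i, j] for i in axes(A, 1)) for j in axes(A, 2)]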

Someone more polyglot than I am will have to comment on C++ comparisons.


You can do some pretty interesting things with indexing in Julia: https://julialang.org/blog/2016/02/iteration


I mean, how much time are we really talking about here?


LoL. It's 2019. There's no reason why the start index should be fixed... Treat arrays as optimised hashmaps, and let users pick a default!

Edit: ah, a sibling comment (now deleted) mentions Julia does have some support for arbitrary indices ("offset arrays").

https://docs.julialang.org/en/v1/devdocs/offset-arrays/


Sigh: this doesn't matter; this is a library-provided feature, not a language feature (as in Ada), so if you try to use anything other than 1-based arrays you'll find lots of incompatible libraries.


I haven't used offset arrays, but I don't expect lots of incompatible libraries as long as they implement the proper array interfaces. Julia has duck typing, and thanks to multiple dispatch you can just specialize the few relevant methods that actually use the new type's special properties.

Many of the most-used Julia types are defined outside of the standard library, such as StaticArrays (for fixed-size arrays) and CuArray (for GPU processing).
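As a small, hedged illustration of that point (adapted from the interfaces chapter of the Julia manual, not from anything in this thread): a new array type only needs a couple of methods, and generic code then just works on it:

  # A lazy "array" of squares that implements the AbstractArray interface.
  struct Squares <: AbstractVector{Int}
      n::Int
  end

  Base.size(S::Squares) = (S.n,)
  Base.getindex(S::Squares, i::Int) = i * i

  S = Squares(5)
  sum(S)        # 55: generic code dispatches on the interface, not the concrete type
  collect(S)    # [1, 4, 9, 16, 25]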


You can use any indexing scheme you like now.

Arrays can start at [-100] if you feel like it.

https://docs.julialang.org/en/v1/devdocs/offset-arrays/
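For instance, a quick sketch using the third-party OffsetArrays package that the linked docs describe:

  using OffsetArrays   # not in the standard library

  v = OffsetArray([10, 20, 30, 40, 50], -100:-96)  # indices run from -100 to -96
  v[-100]        # 10
  firstindex(v)  # -100
  axes(v, 1)     # the -100:-96 axis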


It is not type-safe and doesn't really support programming in the large. I think the best bet is something like Swift, or Microsoft could come out with a C#/F#-compatible offering.


I was curious about Julia, mostly to experiment with the Flux machine learning library. I tried some ‘general purpose’ Julia programming: network IO, consuming RDF data, some natural language processing/text processing, etc.

I was specifically interested in the question ‘could Julia be my general purpose language?’. In reality, I am too much into Lisp languages, Haskell, etc. to seriously want a new language, so this was just a two evening experiment.


> In reality, I am too much into Lisp languages, Haskell, etc. to seriously want a new language, so this was just a two evening experiment.

But what did you think of Julia?


Julia is a very nice language. Still, I would not use it for a large project with many developers: similar situation as Python.


Julia is as type-safe as either C# or Swift. "Supporting programming in the large" has no meaning at all.


That is factually incorrect: Julia offers no static compilation guarantees at all. All it has is some form of duck typing, much like Python; there were some discussions on the issue tracker with Haskell people who objected to the claim that Julia is dependently typed, when in fact it isn't. Programming in the large has a precise meaning. It is what programming languages like Rust, OCaml, C#, Java, Swift, Go, and Ada were designed for: languages that facilitate many engineers (100s-1000s) working on a common project, which requires static types, a sane package or module system, and the possibility to separate implementation from interface (Go interfaces, Swift protocols, etc.). None of these features really exist in Julia (in particular, there is no static type system).


I always think these discussions miss the fact that static type systems only give you these benefits if whatever property you're trying to prove is encoded in your type system. E.g. people keep telling me that static type systems will prevent shape mismatch errors at runtime, but then go ahead and give their variables a static type of "Tensor" (without shape information), because encoding that shape information in the static type system is a bit of a pain and nobody really wants to write that (Not always, but it's a surprisingly common pattern).
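For what it's worth, here is a hedged sketch of what "shape in the type" can look like in Julia, using the StaticArrays package mentioned elsewhere in this thread (an illustration only, not a claim about what any particular framework does):

  using StaticArrays   # fixed-size arrays carry their dimensions as type parameters

  a = SMatrix{3,4}(rand(3, 4))   # typeof(a) == SMatrix{3, 4, Float64, 12}
  b = SMatrix{4,2}(rand(4, 2))

  c = a * b
  typeof(c)   # SMatrix{3, 2, Float64, 6}: the 3x2 shape is visible to the type
              # system, so mismatches are in principle detectable from types alone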

Meanwhile, we have absolutely no problem statically checking Julia code (and in fact we do when generating code for TPU). It's a bit of a non-standard thing to do and we're working on improving the tooling for this, but putting Julia on the same level as python here ignores a significant aspect of the design of Julia.

As for collaborations among thousands of engineers, it is true that we don't really have Julia projects that large yet (other than Julia itself), and I'm sure we'll undoubtedly find things to improve in the language to support collaboration at that scale, but I really would encourage you to look at things like the new package manager or the documentation system. A significant amount of work has gone into their usability, and I think at this point they warrant criticism more detailed than an offhand dismissal.


Yes, it always annoys me how much C++ code is "integer typed".


Julia was created by and for the scientific computing/data science folks. It's a very valuable endeavor, and I am glad we have another programming language/tool for this field.

What I don't quite get is the silly insistence of the Julia fanboys on claiming Julia is a general-purpose programming language, and that as such it should be used for everything. There is nothing wrong with being a focused, discipline-specific programming language.


Why is it a silly insistence? What about it is not fit for general programming? Please be specific.


I get that for scientific/numeric applications Julia is probably at a sweet spot and equally well suited if not better suited than many of its competitors (in many ways it feels like Matlab on steroids). Clearly the stuff you're doing with XLA.jl and the ease of implementation of Neural Differential Equations demonstrate the power of Julia's approach to deep learning.

As you probably know it is hard (and in a precise sense actually theoretically impossible) to design type systems that are both convenient to use (have automatic type inference) and incorporate dependent types (such as shapes of tensors). Julia's solution of using multiple dispatch is not a substitute for such a static type system.

Running into an exception because there is no method for the particular combination of parameter types you have accidentally produced might be fine in a 1000 LOC project, but it makes it pretty hard to refactor anything in a larger project. It results in exactly the same challenges that large Python/JavaScript projects have. Those challenges are well documented and are the reason why large companies move away from those languages to statically typed languages.
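A tiny sketch of the failure mode described above (hypothetical names, purely for illustration):

  area(r::Real) = pi * r^2                   # only defined for Real arguments

  report(x) = println("area = ", area(x))    # nothing checks this statically

  report(2.0)      # fine
  report("two")    # MethodError at runtime: no method matching area(::String)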

This is why I believe a statically typed language with good support for numerical computing will eventually win as the language of choice for large production machine learning applications. Leaving (for now) exotic examples such as Neural Differential Equations aside, current machine learning models would probably profit more from some of the type-safety features that, say, Swift has (using structs, for example, instead of Tensors of certain shapes) than from more comprehensive support of numerical features. For the most part we are talking about very basic linear algebra, after all.

The Swift for Tensorflow design documents argue this pretty convincingly as well.

Regarding the package and module system you might be right, but looking at https://github.com/JuliaTPU/XLA.jl/blob/master/src/XLA.jl and comparing it to how you would assemble a user-facing module in, let's say, OCaml https://github.com/janestreet/core/blob/master/src/core_unix... I personally feel like the latter gives me a lot more information about how I would use the module and a clear separation from the implementation at the same time; similar things can be said about how this would look in Swift or even a well-written C/C++ header. As far as I can tell, package management is still roughly at the level of pip in sophistication, not comparable to solutions like cargo or opam.


It's possible to have both the dynamism and static type checking in Julia. Check out this issue: https://github.com/FluxML/Flux.jl/issues/614


Exactly this. The module infrastructure of Julia is purposefully designed to be like R, which is great for data exploration and statisticians. However, it's a horrible architecture for building modular, large-scale software.


How so, outside of just being a dynamic language? In fact it's probably in the higher end of dynamic languages since the type inference can help tools detect method errors and type instability that cost performance. Multiple Dispatch allows for really good high level abstractions, and the language is really expressive for any domain.


>Python ... the language forms the basis for Facebook’s PyTorch and Google’s TensorFlow frameworks

The state of journalism today ... their Github repo web pages show that there's more C++ than Python in both repos.


To be fair, while that's true I think what's meant here is that Python is the language of choice for using Tensorflow and PyTorch. Probably no other language is used to interface with deep learning libraries and primitives as much as Python.


>I think what's meant here is that Python is the language of choice for using Tensorflow and PyTorch

So it's a good editor that's lacking too, besides the fact-checkers.


This is the reason the Swift for TensorFlow initiative was started by Google - https://github.com/tensorflow/swift/blob/master/README.md

Facebook will obviously go its own way. Given their investments in the JS ecosystem, I'm hoping they end up choosing TypeScript for this.


Is Facebook adopting TypeScript or are they still all-in on Flow, their own competing typed JavaScript alternative?


https://news.ycombinator.com/item?id=18918038

Jest is a Facebook project for testing React. They moved from Flow to Typescript.

https://github.com/facebook/jest/pull/7554

Don't want this thread to go OT. But I do think TypeScript will be an awesome counterpart to Swift for TensorFlow on the ML side.


I'm sorry, but this very much feels like a fluff piece. Yes, we _may_ need a new programming language, or we may not.

Half of this article is about hardware, while only a small part of it is about programming languages.

> There are several projects at Google, Facebook, and other places to kind of design such a compiled language that can be efficient for deep learning, but it’s not clear at all that the community will follow, because people just want to use Python

Are the libraries not implemented in native code? The brief mention that Python gets does not really detail what is wrong with it, aside from not being compiled, I guess. But that is no issue if the libraries are native, so I don't see a reason to move away from Python, let alone create a new programming language for it.


You don't have to check baby's diaper to know they went number 2.

Python ML code is a bit of a joke. If you're any kind of semi professional developer who sees deep learning code for the first time and doesn't say "what the fuck would you do it like that for", I simply don't know what to tell you.

I'm not saying a new language will fix this problem, but some new primitives and idiomatic options protecting people from themselves probably wouldn't hurt the industry.

Likewise, something more ergonomic might actually improve onboarding new developers into the space.


I agree. Python is fine for short programs, but please give me a type safe language for developing more complex programs. BTW, as an unofficial library, some devs at Google do provide Haskell TensorFlow bindings.

I am not a Swift developer but I have looked at Swift enough to know that I like it better than Python.

Common Lisp is another interesting language: the SBCL compiler’s warnings will help you catch many errors early. So, maybe Python would be much better with MyPy type annotations and JIT compilation.


What do you find offensive about, for example, the following CNN written in Python?

https://github.com/keras-team/keras/blob/master/examples/cif...

"I simply don't know what to tell you." Please try! You can at least provide one example of the jokes you are referring to?


That code sets stack vars and calls out to the actual work.

This is a perfect example of what's perplexing me about the "learning" industry.

https://i.redd.it/00824ycw3hwy.jpg

That code also uses pickle to serialize to disk, which is a gross, irritating pattern I've seen from Pythonistas: just pickle it to disk, it works on my laptop.

Seems like Logo programming, to me.


That criticism is surprising because Python doesn't have stack variables and the code serializes the model in h5-format.


poo-poo, people will insult your sensibilities using whatever language you choose; the "problem" is not them, or the language or its idioms, it's you.

If you write ML code for money, you are a pro, the end. Python is not perfect, but everything else is worse, far worse. What is the product that falls out of an ML pipeline? ML pro-tip: it's not hand-crafted artisanal code.


My complaints about ML prettiness were unclear but to clarify it's usually criticism aimed at library authors who often come from very academic backgrounds with very poor sense (lack of exp?) of just how lazy professional developers are. Case in point, you don't care if it comes out as poop because you're getting shit done. Hell it seems like you don't even care if you have to wade in it for another 5 years.

Kudos, but what if I told you the tooling authors could have ensured you never wrote poop but still got shit done?


The end product is a model.

If you can't describe how that model makes its way into a product and is consumed, then you aren't a pro, the end.

From the product's perspective, all you produce is a configuration file.


The libraries are native code in a similar sense to python being native code. The kernels are native and typically highly optimised but there is a possibility that better optimised kernels can be generated on the fly once the computational graph is known.


>> that better optimised kernels can be generated on the fly once the computational graph is known.

Which is already done by python libraries.


The main problem, imho, is that (for best performance) most libraries require the programmer to create a dataflow graph, and to think in terms of this graph. However, this is the perfect job for a compiler. In mainstream compilers, dataflow analysis has traditionally been the task of the compiler, so it seems silly to break with this tradition. A new (compiled) language could bring us back on track.


I'd probably argue the opposite. Dataflow is a natural way to think about ML activities. The problem is that no mainstream languages offer dataflow as a first class construct; at best it's a set of libraries. That creates some impedance mismatch between intent and implementation. That compilers already do dataflow under the covers would hopefully mean that implementing a first-class dataflow language wouldn't be too hard.


By that argument, dataflow is also a natural way to think about NumPy/SciPy programs.

However ... that's not really true. Humans prefer to think in terms of operations (consecutive steps) and labels for intermediate results.

The situation where this matters most is when the dataflow is dynamic. A dynamic graph can make it unclear what is assigned to what, at which point in time, and programming becomes a mess.


What would a language primitive for "dataflow" look like, and what would it offer over an implementation as a library?


There are a few things, nicely covered in "Concepts, Techniques and Models of Computer Programming" by Van Roy and Haridi [0]. Briefly:

1. First-class dataflow variables
2. Syntax for creating flows (cf. pipes in *nix shells)
3. Transparent parallelisation

All of these things exist in libraries of various sorts, but can feel cumbersome compared to native syntax.

[0] https://mitpress.mit.edu/books/concepts-techniques-and-model...
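For a very rough flavour of items 1 and 2 above in Julia terms (tasks plus channels standing in for proper dataflow variables; this is only an approximation of the book's model, not something it prescribes):

  # Two pipeline stages, each running as its own task, connected by channels.
  numbers(n) = Channel() do ch
      for i in 1:n
          put!(ch, i)
      end
  end

  squares(input) = Channel() do ch
      for x in input
          put!(ch, x^2)
      end
  end

  for y in squares(numbers(5))
      println(y)   # 1, 4, 9, 16, 25; consumers block until a value "flows" in
  end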


a wire.


I know Nim has some interesting work going on here: https://github.com/mratsim/Arraymancer

As well as that esolang that appeared a few months ago on the front page. https://github.com/mrakgr/The-Spiral-Language


It would be hilarious if everybody suddenly shifted to a Prolog implementation with ML functionality ¯\(°_o)/¯


I like the idea.

There are two types of ML jobs: one for modeling and another for computation. The former deals with communication with humans; the latter deals with communication with machines. I would prefer a statistical logic programming language (maybe extended from Prolog) for the modeling part, and a strong compiler tool or service that compiles it for any machine, on cloud or edge. Most of the time, what a human wants to achieve is small and declarative. But for any small modification of the program, we have to think about the computational flow, deal with the type errors, and tune the low-level efficiency of CPU and GPU. It's a waste.

I like the settings of SQL where modelers use SQL to mine data while engineers maintain the performance of the computational engines.


More like Miranda. Strongly typed Prolog with a nice compiler


That would be awesome


There's already Nim (https://nim-lang.org/), which has Python-like syntax and is statically typed, with easy parallelisation etc. already. What's lacking is adoption, as it wasn't hyped up. There's rarely a need to create yet another language and wait until it matures and fixes all the issues with the ecosystem etc. If anything, what's required is a way of translating all the Python libs to Nim, or some similar effort.


In my opinion, I'd rather see F# or Julia succeed.


Why did everyone dump C++? Oh wait, they didn't.


Depends on the context.

On GPGPU programming, certainly not; in fact, NVidia's latest hardware is designed explicitly for C++ workloads.

On GUI frameworks, C++ no longer has the spotlight on OS SDKs that it once had.


My point is all AI worth its salt (robots) is C++ based in production.


There I fully agree with you, specially with NVidia doing C++ in hardware.


NVidia is adopting Ada/SPARK for autonomous vehicles work [1]. It's service-proven, compiles to code with C/C++ speeds, and is safe. I learned Turbo Pascal in college in the 80s after Basic, C and assembler in the late 70s/early 80s, and I am attracted to languages like Haskell, Julia, Lisp, and J/APL, yet after toying with SPARK, I think it is probably a good fit for doing safe ML at C/C++ speeds. It would be easy to hook it into all the C++ of TensorFlow too.

[1] https://blogs.nvidia.com/blog/2019/02/05/adacore-secure-auto...


Yeah that as well.

I belong to the same fanboy club, which is kind of why I do like C++ and not so much about C.

I just don't see many adopting it without legislation enforcing it, which is why the focus is on autonomous vehicles, where Ada already has a good story.


If I am not trying to write verbose, safe code with things like Rust or Spark, I still have a thing for low-level C vs. C++ bloat for fun stuff. I am going to commit this year to Spark, and probably Rust if Spark doesn't work for me, and then C++ after Rust. I have not used C++ in years, so I need to look at the latest and greatest before passing judgement.


I also like SPARK, but the 300 MB download didn't succeed in getting me an IDE that would not immediately crash. It comes with its own Python 2.7, which is at fault here, but nevertheless.


I've downloaded the IDE on both my machines and have not had a crash in over six months of playing with it. What sparks the crash?


I wonder what he thinks about Julia. There are lots of projects to turn it into an "ML language".


I think it was explicitly designed as an "ML Language" from the ground up already. Do you mean packages like Flux and KNet?


I'd much rather a deep learning system that could handle simplified natural language and file operations, and then grow a network for specific tasks. Surely the point of AI is to liberate humans from having to write everything in code, and especially from needing to learn another language (which he fairly observes few people want to do).

I do everything flow-based these days. It's not the fastest way (and I also have the luxury of not having to please anyone but myself), but it allows me to only think about my domain problem instead of programming language issues.


Maybe not quite what you have in mind, but I have tried saving trained Keras models, loading them with Racket (a modern Scheme language), and implementing the required runtime. My idea was to encapsulate trained networks like ‘functions’ for use in other languages.


And?


and... I think this is a neat idea and I thought other people might enjoy trying it. code (2 repos) is on github


Could you tell a bit more about how you do FBP? I looked at it briefly in the past but haven’t tried it myself.


Let me preface by saying I use it just for API bashing as a researcher, not for deployment of anything commercial or even public-facing. I'm totally unqualified to talk about a production environment.

I came at it from audio synthesis, where modularity and interoperability are priorities and there is relative cooperation between manufacturers and developers on technical standards. I mention this because audio synthesis and the closely associated business of sequencing have a great deal in common with breadboard electronics and super-basic computing like adders, flip-flops, and so on. You can implement simple classic video games like Pong and Asteroids in a modular synthesizer and play them on an oscilloscope if you're that way inclined.

If you find this interesting, I'd suggest Reaktor as the software platform of choice, as it's affordable, has a large community, and excellent documentation. Flowstone started as audio software and is now aimed at the robotics industry. It has one of the nicest UIs and allows you to write code directly into modules. I use KNIME for high-level data processing because it has an extensive library of database connectors/format translators/API hooks, etc.



Fortran is going to make a big comeback. I can feel it.


It already did, kind of; after all, it has all the modern goodies (modules, generics, OOP) and has been supported on CUDA since day one.

One of the major reasons why many researchers never cared for OpenCL.


> Fortran is going to make a big comeback. I can feel it.

It is indeed one of the few existing languages appropriate for generic experimentation in numeric programming.

For example, computing the product of two matrices in C or Fortran is exactly as fast whether you write three nested loops or call a library function. In Python, Julia, Octave, etc., the difference is abysmal. This is a very sad state of affairs, which forces a mindset where you are allowed a limited toolset of fast operations and the rest are either slow or cumbersome. If you want to compute a variant of the matrix product using a slightly different formula, in Fortran it is a trivial change: you just write the new formula inside the loop. But in Python or Julia you are stuck with either unusably slow code or writing it in another language entirely.

"Vectorized" operations are cool, elegant, and beautiful. But they should not be the only tool available. As differential geometers often resort to coordinates to express their tensor operations, so should programmers be able to.


Have you used Julia recently? Hand-written loops are just as fast relative to vectorised operations as in C and Fortran; you just need to annotate them appropriately with @inbounds and @simd. Sure, that's more work, but removing safety checks should be explicit.
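To make that concrete, a minimal sketch (mine, not the parent's code) of the kind of hand-written kernel being discussed: a plain dot product with the safety checks removed explicitly:

  function mydot(x::AbstractVector{T}, y::AbstractVector{T}) where {T<:Number}
      length(x) == length(y) || throw(DimensionMismatch("lengths differ"))
      s = zero(T)
      @inbounds @simd for i in eachindex(x, y)
          s += x[i] * y[i]
      end
      return s
  end

  mydot(rand(10^6), rand(10^6))   # the loop compiles to SIMD code, much like a C loop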


Not recently, thanks. I will surely try!

My main gripe with the Julia interpreter was that it was ridiculously slow to start up (I was using it as a "calculator" from within a shell loop: each iteration spawned a Julia process to perform a simple matrix computation). Has this performance improved recently?

By the way, what do you mean by "removing safety checks should be explicit"? This sounds like a problem that the language should be able to deal with itself without bothering the programmer (e.g. if the bounds of the loop are variables of known value, they can be checked beforehand, so the bounds checks can be safely omitted).


AFAIK it was designed for long running processes, not for quick start up time. On the other hand, I don't find it that slow for experimentation on the REPL.


Regarding Julia, check the backup slides (no. 20-29) of this talk of mine:

https://www.slideshare.net/MaurizioTomasi/towards-new-soluti...

Sure, I am not calculating the product of two matrices, but something else. In that case, I proved that Julia's performance is entirely on par with vectorized C++. I am inclined to believe that somebody might produce similar results for the test you are proposing (matrix product).


Can you please provide a download link for the pdf of these slides?


There should be a "Download" button just below the title, which allows downloading the PDF. If it doesn't work, you can find the file "presentation.pdf" in this BitBucket repository:

https://bitbucket.org/Maurizio_Tomasi/adass2018-julia/src/ma...


Thanks! it's a great presentation and it answers many of my questions.

(and no, the download button does not work for me)


There are faster algorithms than 3 nested for loops.


I know; that was just an example. Consider the scalar product of two vectors then (which is a particular case when 2 of these 3 loops only have one iteration).


We may need a programming language for DL, but I doubt it'll happen soon, if it happens at all. The Lindy effect works in favor of Python here, as many data scientists have prior "big data" experience in it, and typical software engineers know it from the "scripting/tooling" world.

People have been calling for the phaseout of C/C++, but even today's most popular DL frameworks have backends written in C++ in lieu of Rust.


> Deep learning may need a new programming language that’s more flexible and easier to work with than Python

Whoever has a richer imagination than mine: how can a programming language be easier than Python? What's difficult about it?

The only thing I find inconvenient in Python is you can't simply put every function (including every member function of a class - that's what I would love to do) in a separate file without having to import every one of them manually.


Not generally easier but easier to express specific, complex concepts. Tensorflow does alright but it’s certainly not beyond improving.


Read it again. He/She did not imply that Python is hard.

I do believe that he/she meant that they need way higher levels of abstraction than Python.

Let's say Python is the current C; they want something that is to Python what Ruby is to C.


It's just hard for me to imagine a higher level of abstraction. That's why I invite whoever has imagination more rich than mine to suggest ideas.


Have a look at Prolog, that might expand your mind a little.


>The only thing I find inconvenient in Python is you can't simply put every function (including every member functions of a class - that's what I would love to do) into separate file without having to import every one of them manually.

That strikes me as an anti-pattern although you could mimic this behaviour by exposing each one at the package level. Performance is likely to be terrible if you're working on anything sufficiently large.


> The only thing I find inconvenient in Python is you can't simply put every function (including every member functions of a class - that's what I would love to do) in a separate file without having to import every one of them manually.

When you create a .py file, the CPython interpreter treats it as a "module". You can basically treat a Python "module" as if it were a class because the global variables are automatically scoped unless you import them explicitly. You could actually write some code that would allow you to instantiate the entire module in the same manner you would instantiate a class. Though, I have no idea where this pattern would be useful.


Yes. Deep Learning definitely needs a new programming language. Uber's Pyro is worth checking out. (https://eng.uber.com/pyro/)


Umm that's not a programming language as such, it's a probabilistic programming DSL built on top of Python/Pytorch. It's one level of abstraction up.


I’m not the biggest Python fan, but I can definitely tolerate it, as long as I have a decent way to use the models from another language.


I always thought Haskell would be amazing for this task.

But Python is really popular and easy to grasp. You'll end up writing a new programming language, and people would still use Python in the end.


I thought that language was Julia.


More likely a markup language


sql


1. Features
2. Performance
3. Usability

You can pick only two.


Julia fits the bill quite well in my opinion:

* features: you have an extensive standard library, macros, high level constructs (like generators, asynchronous programming, ...)

* performance: Julia is based on LLVM, and as long as the code is type-stable and avoids memory allocation in hot loops, performance is similar to C or Fortran (see the sketch after this list).

* usability: you can also program at a very high level like you would do in matlab or python/numpy.
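A minimal sketch of the type-stability caveat from the performance point above (the standard example from the Julia performance tips, nothing project-specific):

  # Type-unstable: returns an Int (0) or a Float64, depending on the runtime value.
  unstable(x::Float64) = x > 0 ? x : 0

  # Type-stable: always returns the same type as its argument.
  stable(x::Float64) = x > 0 ? x : zero(x)

  # In the REPL, `@code_warntype unstable(1.5)` highlights the Union{Float64, Int64}
  # return type that forces slower code in hot loops; `stable` infers Float64.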


I really hope it takes off as it seems very promising.


Or use Rust to get all 3!


I'm sorry, but Rust certainly doesn't score high on usability. Compare e.g. the "Guessing Game" intro from the official docs; just look at how much more complicated both the code and the tutorial are compared to how you would do the same in Python, heck, even in Fortran:

https://doc.rust-lang.org/book/ch02-00-guessing-game-tutoria...


I don't think comparing 100 line programs is a good benchmark for usability. Most software is much larger and the design choices you have to make for a programming language should favor programs with at least a few thousand lines of code.


Then try to write a few thousand lines of code in Gtk-rs, including custom widgets; the issues with the borrow checker and internal mutable struct data accessible to callbacks are what led to the creation of Relm.


The GUI story in Rust is currently terrible, I agree. But writing GUIs in Python with wrappers to C(++) libraries is not exactly nice either ;)


To me it looks pretty nice.

https://www.qt.io/qt-for-python


The final code is longer because Rust's standard I/O is a lot more minimal than Python's, Fortran's, and even C's. If you use a library like [text_io], which provides the equivalent of scanf, then I don't think that it'd be any longer.

[text_io]: https://crates.io/crates/text_io


But surely, having a sane way to do I/O in the standard library is one of the traits included in "Usability"?


Why does it have to be in the standard library when adding it as a dependency involves adding just a single line? Since (a) there's no one true way to do input parsing, and (b) most projects don't need to interactively ask for input, it doesn't sound like something that belongs in a standard library.


They still need a bit of work to improve no. 3, especially against GC'd languages.


Honestly, I think Swift for TensorFlow fits best for this case right now. The problem is that they seem to be integrating TensorFlow directly into the compiler (correct me if I'm wrong), which is going to be impossible to upstream and just going to lead to a Google-derived Swift fork.


You could use D for all three - https://github.com/Netflix/vectorflow


From the examples there, we find e.g. this nugget:

  return bigEndianToNative!T((cast(ubyte*)&b)[0..b.sizeof]);
Do you really feel this is a language that scores high on usability? Because it doesn't look like it.


It's basically a reinterpret_cast, it's dirty and should look dirty, even if it's necessary in this case.


Why is it necessary to do something that looks dirty? To me, "usability" means I shouldn't have to care about endianness, pointers, all the superfluous mental overhead of C et al. If you program in Python, at best you need to care about whether something is a float or int (or string if read from file). In recent Python versions we no longer have to care if a string is unicode or not (they're all unicode now); that is precisely progress in usability.


1) Define usability? Syntax? 2) Which language do you find "usable"?


This guy - or the article's author - hasn't heard of DSLs?



