
For that reason, I generally motivate not using Python to people on grounds of correctness rather than performance per se. I can't be bothered to explain how compiler optimizations work, so I usually just resort to "it's magic" when it comes to performance.

Going from a statically typed language to using Python to do number crunching genuinely makes me want to vomit. I don't understand how people convince themselves that it's productive to write so many tests, check types at runtime, and so on.

I actually love Python's syntax, but the language design seems like it was thrown together over a weekend.




Take a look at Nim if you haven't already.

It has a Pythonesque syntax, but strong static typing with extensive user control of the semantics you seem to care about. For example, floats and ints do not convert automatically unless you explicitly "import lenientops"[0]. You can also define 'operational transform' optimizations (such as: c = c + a*b converts to multiply_accumulate(c, a, b), which is a big performance difference for e.g. matrices) that will be applied by the compiler, so that your code says what you mean (c = c + a*b) and yet the compiler gets to compile the efficient version (using the relevant BLAS routine).
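Roughly, both features look like this; a minimal sketch, where mulAcc is a made-up stand-in for a real fused kernel and I'm writing the rewrite pattern from memory:

    let n = 3           # n: int
    let x = 2.5         # x: float
    # echo n * x        # without lenientops: type mismatch error
    import lenientops   # opt in to mixed int/float arithmetic
    echo n * x          # 7.5

    proc mulAcc(a, b, c: float): float =
      # stand-in for an optimized multiply-accumulate kernel
      a * b + c

    # term-rewriting template: the compiler is free to rewrite the
    # readable form a*b + c into the fused call
    template optMulAcc{a * b + c}(a, b, c: float): float =
      mulAcc(a, b, c)

    var acc = 1.0
    acc = 2.0 * 3.0 + acc   # candidate for rewriting to mulAcc(2.0, 3.0, acc)
    echo acc                # 7.0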

It's young, but on the one hand it has the best FFI for C, C++, Objective-C, or JS you'll find anywhere, and on the other, already a good deal of native implementations, including e.g. a pure-Nim BLAS that comes within 5-10% of the best out there (including those with carefully hand-optimized ASM kernels).

[0] https://nim-lang.org/docs/lenientops.html


I’d second that. I’ve been doing a bit of algorithm/NN stuff in Nim this year, since it simplifies the “deployment” phase, and it’s been surprisingly handy despite the limited libraries, as there’s a C library for everything. It has a few rough edges, but they're manageable.

It has the spirit of Python 2 but written by a compiler geek (in the good sense). If I were to write an HFT system, it’d be tempting to use. The new default GC is reference counting, but without atomic ref counts.

P.S. thanks for the lenientops tip


I am involved with the foundation that runs the D programming language, so I've found my home.

Other than syntax, obviously.

FFI: Can Nim use a C++ class and vtables? D does it all the time. Nearly every language proclaims to have the best C++ interop, but only D seems to actually be able to do it: templates, classes, structs, and more all work.

We also have Mir, which I haven't benchmarked in a while, but it was faster than OpenBLAS and Eigen a few years ago, and was recently shown on the forum to be faster than numpy (the C bits).


> FFI: Can Nim use a C++ class and vtables? D does it all the time. Nearly every language proclaims to have the best C++ interop, but only D seems to actually be able to do it: templates, classes, structs, and more all work.

Well, it depends on how you define "best". Beyond an ABI, FFI is obviously a function of the implementation rather than the language per se.

The main Nim implementation can use C++ as a backend language, and when it does, it can use any C++ construct very easily by way of the .emit and .importcpp directives. For sure, classes, structs, and exceptions all work, and IIRC templates do too (although you might need to instantiate a header yourself for each concrete type or something; I haven't done that myself). This also means it can use any C++17 or C++20 construct, including lambdas and friends. Does D's C++ interop support C++17? C++20? Can you guarantee it will support C++27? Nim's implementation already does, on every single platform you'll be able to use C++27 on (as long as C++27 can still compile modern C++ code; there have been backward-incompatible changes over C++'s history).
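For flavor, here's a hand-rolled sketch against std::string; the pragmas (.importcpp, constructor) are real, though the exact signatures are from memory:

    # compile with: nim cpp example.nim
    type
      StdString {.importcpp: "std::string", header: "<string>".} = object

    proc initStdString(s: cstring): StdString
      {.importcpp: "std::string(@)", constructor, header: "<string>".}

    proc size(s: StdString): csize_t
      {.importcpp: "#.size()", header: "<string>".}

    let s = initStdString("hello")
    echo s.size()              # 5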

You can't just #include a C or C++ header and call it a day; you need a Nim-compatible definition for every symbol (variable, macro, function, class, ...). There are tools that help you and make it almost as easy as #include, such as nimterop[0] and nimline[1], and "c2nim", which is included with the Nim compiler, is enough to generate the Nim definitions from the .h definitions (though it can't do crazy metaprogramming; if D can do that, then D essentially includes a C++ compiler, which is a fine way to get perfect C++ compatibility - Nim does that through its C++ backend).
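For a single C function, the hand-written definition really is one line:

    # Nim-side declaration for C's strlen; importc + header is all it takes
    proc strlen(s: cstring): csize_t {.importc, header: "<string.h>".}
    echo strlen("hello")       # 5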

But Nim can also do the same for JS when treating JS as a backend.
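e.g., compiled with "nim js", a binding to the browser's alert() is just a couple of lines (a sketch; importjs ties the Nim proc to a JS expression):

    proc alert(msg: cstring) {.importjs: "alert(#)".}
    alert("hello from Nim")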

And it can basically do the same for Python, with nimpy[2] and nimporter, generating a single executable that works with whichever Python DLL is installed (2.7, 3.5, 3.6, 3.7) - which is something not even Python itself can do. There was a similar Lua bridge, but I think that one is no longer maintained.
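Both directions are a few lines with nimpy; this mirrors the examples from its README:

    import nimpy

    # call Python from Nim
    let os = pyImport("os")
    echo "Current dir is: ", os.getcwd().to(string)

    # or expose Nim to Python: build with --app:lib and import the
    # resulting module from Python as usual
    proc greet(name: string): string {.exportpy.} =
      "Hello, " & name & "!"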

> We also have Mir, which I haven't benchmarked in a while, but it was faster than OpenBLAS and Eigen

There's quite a bit of a scientific stack built natively in Nim. It is far from self-sufficient, but the ease with which you can use a C library makes up for it. I haven't used it, but Laser[3] is on par with or exceeds OpenBLAS speed-wise, and generalizes to e.g. int32 and int64 matrix multiplication; Arraymancer[4] does not have all of numpy's functionality, but it does have quite a few nice bits from scikit-learn, supports CUDA and OpenCL, and you can use numpy through nimpy if all else fails. Also notable is NimTorch[5]. Laser and Arraymancer are mostly developed by mratsim, who occasionally hangs out here on HN.
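To give a flavor, basic Arraymancer usage looks like this (a short sketch from memory of its README):

    import arraymancer   # nimble install arraymancer

    let a = [[1.0, 2.0],
             [3.0, 4.0]].toTensor
    let b = [[0.0, 1.0],
             [1.0, 0.0]].toTensor
    echo a + b   # elementwise addition
    echo a * b   # `*` on 2-D tensors is matrix multiplication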

D is a fine language; I used it a little in the D1 days, and it was indeed a "better C++", but it did not deliver enough value to be worth it for me, so I stopped. I know D2 is much better, but I've already found my better C++ (and better Python at the same time!) in Nim, so I haven't looked at it seriously.

[0] https://github.com/nimterop/nimterop

[1] https://github.com/sinkingsugar/nimline

[2] https://github.com/yglukhov/nimpy

[3] https://github.com/numforge/laser

[4] https://github.com/mratsim/Arraymancer

[5] https://github.com/sinkingsugar/nimtorch


>Going from a statically typed language to using Python to do number crunching genuinely makes me want to vomit. I don't understand how people convince themselves that it's productive to write so many tests, check types at runtime, and so on.

Totally agree, although I think strong typing is also important, not just static typing. C/C++ have a weak type system because of the automatic conversions they allow.
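To make the contrast concrete, here's a Nim-flavored sketch; each commented-out line is a compile-time error in Nim, where C/C++ would convert silently:

    let flag: bool = true
    let n: int = 1
    # echo flag + n       # error: bool does not convert to int
    var m = 1
    # let x: float = m    # error: no implicit int -> float conversion
    let y: float = 1      # fine: integer *literals* do convert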

The problem with this discussion, however, is that the people who push for dynamic languages usually do so because it's "so fast" to get something done in Python, and they won't test their software anyway.


Python is great for small programs that barely need any tests, because you can tell they're correct by reading the code or by running them until you get the right answer. When I have to maintain a large Python program, I agree: it is too easy to miss some error condition that doesn't work at all (anymore).


It made more sense before the era of IDEs and code completion. I used to greatly prefer Python to C++ or Java back when I actually had to type full names and look up interfaces like a pleb.

Now I can't stand untyped Python. You have to actually look at the documentation to see what methods a class has; who has time for that?

Writing golang with TabNine is otherworldly. It's so regular - not just the syntax, but the variable naming conventions, and even the coarse structure - that it feels like loosely nudging the autocomplete in the direction of the goal.



