This is one of the nuggets out of it. I have been using Python with scipy, and more extensively with its scikit libraries, since 2014. I started by solving a discrete traveling salesman problem for seating arrangements and was astonished at how easy matrix algebra is with these libraries, and got hooked. It helped me do the scientific computing and also build a user-friendly web interface, so the end user could focus on seating arrangements and conference organization. You can solve the same problem with Matlab or GNU Octave, but I couldn't have easily developed the whole system, user interface included, in them.
If Swift wants to be used in scientific computing, it needs to follow a similar policy of managing its scientific stack.
Julia is good for scientific computing, but its UI and web development libraries aren't that powerful. So we'll see how it develops.
I constantly see criticism of Python on HN, and things rewritten in Rust presented as revolutionary or groundbreaking, as if they will change the whole world. Earlier this fad was with the Go language. Probably, in time, Rust will be used for core library development that brings substantial benefits, like scipy, not for rewrites that are just wrappers around a C library with unsafe code, which can be done much better with Python.
Python isn't criticized as much as it is recognized as the lesser tool for certain categories of development. That's fine because it is a general purpose language with many great uses.
You ask me as if I don't know what I'm talking about -- as if I am not well acquainted with rust-analyzer grinding away at compilation in the background, slowing down the machine and interrupting my workflow?
Rust's Achilles heel is compilation. It is a hard enough problem to solve that it won't ever reach a level that it can successfully run in the background of a REPL workflow and offer a user experience comparable to that of Python. The user experience is the reason that Rust will not ever present a viable alternative. Just as with Python, this is also fine. Rust isn't the right tool for every job.
GHCi uses an interpreter, with ability to load already compiled code into the REPL.
The main innovation of recent programming languages is the use of social media for marketing and PR by their companies to promote them.
For some languages I think yes, others no. Rust in particular has a memory management philosophy which was basically novel when it came out; it existed in research papers and such but not very many people were using those languages to do mainstream software engineering. It’s a big deal.
And thanks to Rust, the designers of Swift, D, Chapel, C++, Ada, OCaml, Haskell, and maybe even C# and Java, are now taking linear types into consideration.
So for many deployment scenarios, GC + a linear types subset might be good enough, still one can thank Rust for those improvements even if indirectly.
Only time will tell; right now I don't see any real benefits. I do agree, though, that Rust is a good attempt. Right now the majority of Rust libraries depend on underlying C/C++ with unsafe code. I'll watch how it slowly replaces Firefox's use of C++.
I'll wait and watch for Rust to replace LLVM, which is written in C++, with something completely written in Rust. They already replaced the OCaml parts with Rust, so we'll see when they can replace the C++ dependency that is LLVM.
(My opinion might be biased, as Rust code reminds me of the 1990s and doesn't look any different from C++; I like Swift's syntax more.)
However, I think it's disingenuous to claim that "a majority of the rust libraries" use bindings like that, or to suggest that Firefox is the primary project worth watching that uses Rust. As mentioned elsewhere in this thread, Rust articles and code examples fairly routinely show up here on HN. It's surely not complete, and any programming language or framework is an exercise in trade-offs, but from hacking on bare metal or microcontrollers to web GUIs with interactive physical simulations, Rust-only or Rust-component code is showing up in a number of domains and production environments. A lot of this really seems to stem from a safety-by-overall-design rather than safety-by-specific-implementation approach, which seems to resonate with experienced developers and newcomers alike.
Edited for spelling/grammar
Firefox, which is the reason Rust originated, cannot yet be completely written in Rust and will continue to depend on C++ for some time. I feel that, like Python, Rust might become more popular once enough hardware support and core libraries move away from C/C++ to Rust. That is still a decade or two away.
So I still feel Rust needs another decade or two before it can claim to be a C/C++ replacement, not at present.
This is exactly the problem Swift faced in replacing Objective-C in the Apple ecosystem, where everything is controlled by Apple. Rust has a much bigger hill to climb to become a really useful systems programming language.
Meanwhile, the other programmers I follow share what they've built as in what their code _does_ and why that makes them giddy. (C#, Python, JS,...)
At C++ conferences, the talks are about how to implement data structure X in C++, how to do certain algorithms in C++, build systems, metaprogramming tricks, and so forth.
In other languages, the talks are about how product X was built in language Y.
This is a very key point. I still find that Matlab often has a larger set of features and better efficiency for linear algebra. But thanks to Scipy, you really can expand your mind about what kinds of systems to interface with.
A scipy-copyleft for such GPLed components would be nice though.
As a developer using the language this is great: I can write using clean modern approaches* and still use older packages. (Making this all continue to work is a burden on the implementation developers of course.)
* not to argue which modern features are clean or not -- this is about SciPy.
Edit: to elaborate, each committee would be a vertex and a committee-vertex is connected to another vertex if one or more of their members overlap. Once you’ve got such a graph you simply have to a assign a color (or number/letter/anything) to every vertex in the graph, with the requirement that no neighboring (as defined by being connected by an edge) vertexes are allowed to have the same color. You also try to use the minimum amount of colors needed. Then you can safely schedule the committees with the same colors for the same timeslots, and be sure that everybody will be able to attend.
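The coloring scheme described above can be sketched with a simple greedy algorithm. This is a minimal illustration, not the commenter's actual code: the committee names and memberships below are made up, and greedy coloring only approximates the minimum number of colors (timeslots), which is what you'd typically accept in practice since minimum graph coloring is NP-hard.

```python
def greedy_coloring(graph):
    """Assign each vertex the smallest color not used by any neighbor."""
    colors = {}
    for vertex in graph:
        taken = {colors[n] for n in graph[vertex] if n in colors}
        color = 0
        while color in taken:
            color += 1
        colors[vertex] = color
    return colors

# Adjacency sets: an edge means the two committees share a member,
# so they must not meet in the same timeslot. (Hypothetical data.)
committees = {
    "budget":   {"outreach", "program"},
    "outreach": {"budget"},
    "program":  {"budget", "awards"},
    "awards":   {"program"},
}

timeslots = greedy_coloring(committees)
# Committees assigned the same number can safely meet at the same time.
```

Any two committees connected by an edge end up with different numbers, so scheduling by color guarantees no member has a conflict.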
Here is a grand tour of scipy, as a Jupyter notebook, that a colleague and I used as an intro to scipy for PhD students in astrophysics:
You can try it without installing the software by clicking on the binder button.
I do not see it that way. A whole generation of students is growing up with the wrong impression that writing a "for" loop is inevitably inefficient. They also believe it is OK for large arrays of numbers not to be a core language construct, requiring an external library like numpy. These two incorrect beliefs are incredibly damaging, in my view.
In a sane programming environment (e.g., with JIT compilation), writing a matrix product using three loops should be just as efficient as calling a matrix product routine. The matrix product should also be readily available without needing to "import" anything.
On the other hand (and I know this is a niche application), consider writing code for execution on GPUs. There it's great that you can express your algorithms in the language of matrix and vector operations and forget about moving memory in and out of the GPU, threads, blocks, warps, and other arcana.
I think that what you call "a sane programming environment" just means "an easy programming environment for me". I don't mind at all having to import libraries. Have you ever tried to structure a big Matlab project?
So basically I agree with you about having fast for loops and disagree on everything else.
Do you want something fast using for loops and other primitives? Don't use python!
Do you want something with great stats and easy graphing but little else? Use R or similar.
Do you want all the benefits of python but want to compile the hot path? Use bindings/cython/any of the numerous inline voodoo python tools.
> The matrix product should also be rightly available without need to "import" anything
Why? Even the python stdlib requires imports. The bare language needs keywords and primitives, that's it. Any matrix operation code that loads without import is more overhead.
It sounds like you want a DSL * . Use matlab/octave/R/SPSS. But don't be shocked when you wanna wrap a gui/webpage/api around your code and it's painful.
* DSL in the loosest sense, a specialized, less general language
And if a matrix product operation were available to a language as a standard construct, wouldn't it be every bit as opaque, encouraging students to think that hand-written for-loops aren't as effective as using black-box things?
"""Run length encode array."""
previous = sequence
count = 1
out = 
for element in sequence[1:]:
if element == previous:
count += 1
previous = element
count = 1
diffs = np.concatenate(( np.array((True, )), np.diff(sequence)!=0))
indices = np.concatenate((np.where(diffs),np.array((sequence.size, ))))
counts = np.diff(indices).astype('uint16')
values = sequence[diffs].astype('uint8')
return np.rec.fromarrays((counts, values),names=('count','value'))
Note: the numpy version is missing the import.
Note 2: I tend to prefer numba to numpy. Yet sometimes numpy is unavoidable, especially for linear algebra routines. In that case I am in a world of pain, because numba and numpy do not interact well at all!
Can you please elaborate on this? Not challenging you but trying to understand the nuances.
Numba tutorial says otherwise:
"Numba likes NumPy functions"
Gave a little presentation on my findings on using different approaches
That's cool and fun to know, and some people may enjoy writing optimisations like that (and we def need those people) but so many people just want to crunch numbers and make graphs. They should just call the vectorized library functions and get on with their research.
(P.S. Python has some JITed implementations and also has a built in matrix product - @)
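For anyone who hasn't seen it, the `@` operator (PEP 465, Python 3.5+) is core-language syntax, while numpy supplies the implementation; a tiny illustration:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

c = a @ b            # the @ matrix-multiplication operator from PEP 465
d = np.matmul(a, b)  # equivalent spelling

print(c)  # [[19 22]
          #  [43 50]]
```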
Compared to the previous 2 generations of Matlab students, Python/scipy is a night-and-day improvement.
    size                :naive    :tiled    :fastf77   :blas
    (4, 4, 4)           0.0       0.0       0.0        4.5e-6
    (5, 5, 5)           0.0       0.0       0.0        4.0e-6
    (32, 32, 32)        1.3e-5    2.7e-5    8.0e-6     1.9e-5
    (33, 33, 33)        2.0e-5    2.2e-5    8.5e-6     2.1e-5
    (256, 256, 256)     0.0114    0.0139    0.00543    0.000972
    (257, 257, 257)     0.0133    0.0121    0.00563    0.00109
    (512, 512, 512)     0.0942    0.154     0.0426     0.00712
    (513, 513, 513)     0.106     0.101     0.0445     0.00744
    (1024, 1024, 1024)  3.25      1.25      0.437      0.0582
    (1025, 1025, 1025)  2.39      0.888     0.448      0.0607
Obviously OpenBLAS is so easy to package that it's not really worth avoiding it, but it was very eye-opening to see just how easy it is to get within an order of magnitude (easier, in fact, than getting into the 10x-20x range).
Hardware: 8-core 3.4GHz Haswell i7 with 32kB L1, 256kB L2, 8MB L3, and 8GB RAM.
Well, the Python array is good for most of the use cases. Adding other types to the core language could be useful, but then you get into "which type would be better for me"? Somebody else then would say "but what about a sparse matrix, etc"
I think it's ok to look to numpy for those things.
> writing a matrix product using three loops should be just as efficient than calling a matrix product routine
Well, yes, but actually no.
Because of code vectorization, the hand-written loop is going to be less efficient. Yes, compilers are getting better, but they can't do miracles (especially in languages like C).
By using matrix products you're explicit in your desired end result. This can then be optimized by the libraries and you don't have to worry about all the tricks behind it.
The reason Python is so amazing in this regard is that all of this has been done for you on some level, and you just need to use it. Sure, could the numpy/scipy interfaces into lower-level code be more closely aligned with plain for-loop implementations of the algorithms they represent? Perhaps.
Python added the ellipsis notation, used in arr[..., 0], as a result of feedback from the Numeric project in the 1990s (Numeric is the precursor to NumPy).
So while the implementation of "large arrays of numbers" is not part of core Python, the syntax of one way to index large arrays of number is.
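A quick illustration of that split (array shapes here are arbitrary): the `...` object itself is plain core Python, while the array type that consumes it lives in numpy.

```python
import numpy as np

arr = np.arange(24).reshape(2, 3, 4)

# '...' expands to as many full slices as needed, so these are equivalent:
first = arr[..., 0]
same = arr[:, :, 0]

# The ellipsis is a core-language singleton, independent of numpy:
assert ... is Ellipsis
```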
FWIW, I agree with others - there's no way that a matrix product using three loops will be as efficient as calling a matrix product routine. Efficient versions of the latter also consider memory access patterns and cache use. The example tiled version at https://en.wikipedia.org/wiki/Matrix_multiplication_algorith... uses 6 loops.
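To make the 6-loop structure concrete, here is a sketch of a cache-blocked ("tiled") multiply, written in Python purely to show the access pattern. The function name and tile size are mine, and in pure Python the interpreter overhead swamps any cache benefit, so this illustrates the scheme rather than the speedup; real implementations do this in compiled code with vectorization on top.

```python
import numpy as np

def matmul_tiled(A, B, tile=32):
    """Blocked matrix multiply: 3 loops over tiles, 3 loops within each tile."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2
    C = np.zeros((n, p))
    for ii in range(0, n, tile):            # loop over row tiles of A/C
        for kk in range(0, m, tile):        # loop over the shared dimension
            for jj in range(0, p, tile):    # loop over column tiles of B/C
                # The inner 3 loops touch only tile-sized, cache-friendly blocks.
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, m)):
                        a = A[i, k]
                        for j in range(jj, min(jj + tile, p)):
                            C[i, j] += a * B[k, j]
    return C
```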
On the other hand, in C++, hand-rolled matrix multiplication is both slower and an order of magnitude less accurate than MKL (or possibly OpenBLAS too).
On the other hand, notice that scipy is in large part an interface. The underlying solvers (e.g., numpy.linalg.solve) are often routines written in Fortran several decades ago, and already duly cited and academically recognized.
I'm ignorant of numerical computing, so I have to ask: is Fortran still used because it is that much faster than C or any other specialized language? I assume that is the case because I'm sure someone would have rewritten those routines by now otherwise.
> SciPy has a strong developer community and a massive user base. GitHub traffic metrics report roughly 20,000 unique visitors to the source website between 14 May 2018 and 27 May 2018 (near the time of writing)
SciPy 1.0 was released towards the end of 2017, so that timing makes sense for a 1.0 retrospective. Not sure why it took so long to get published.
(Largely forgoing peer review as a scientific field is, however, not without problems. To say the least.)
It took us a long time (too long!) to construct that manuscript. Like SciPy itself, it was written primarily in the spare time of contributors with full time jobs. Glad to see it out there, though!