The machine-learning world, especially the "Google Brain" research team, figured out that NumPy was an awesome piece of software for dealing with large arrays of numbers and matrix multiplication. They built "TensorFlow" on top of it around 2015, and it became very popular. Facebook followed suit and released PyTorch in 2016.
IPython/Jupyter notebooks (for Julia, Python and R), dating from around 2015, were another factor, also adopted by the AI/ML community.
The alternative data-science languages at the time were Mathematica, MATLAB, SAS, Fortran, Julia, R, etc., but Python probably won because it was general purpose and open source.
I suspect Python would not have survived the 2/3 split very well if it hadn't been for AI/ML adopting it as their main language.
> when the tooling was so inferior
Since 2012, Conda/Anaconda has been the go-to installer in the SciPy/NumPy world, and it also solves a lot of the problems that uv solves.
In Indonesia, many Javanese people traditionally have only one name. When they migrate to countries like Singapore, where a surname is required, they often use their given name as both their first and last name. As a result, you may see names such as Chandra Chandra or Supardi Supardi.
Nowadays Indonesian law requires at least two words in a person's full name, and the full name is regarded as a single entity no matter how many words it contains. Older generations of Javanese, having only a single name, usually duplicate it; I have seen such duplicated names in official Indonesian documents.
Or you get something like Someone FNU, where FNU at some point meant 'first name unknown' but is now simply the person's last name, at least among the people I've known. I've also seen references to it ending up as the first name, i.e. FNU Someone, which would make a little more sense but is still pretty bizarre. Someone Someone seems like the best way to handle it, assuming single names aren't allowed (because yeah).
Only when used in a naïve way, which Rust does not do. The increments happen only when "clone" is called and the decrements only at scope exit, and thanks to Rust's ownership/borrow checking that is rarely needed, combining the best of both worlds (yes, implementations that aggressively increment/decrement in loops and on every function call can be very slow). Rust also separates Arc (atomic reference counts) from Rc (non-atomic reference counts) and enforces the usage scenarios in the type checker, giving you cheap Rc in single-threaded code. Reference counting done in a smart way works pretty well, but you obviously have to be a little careful about cycles (which in my experience are pretty rare and fairly obvious when you have such a data type).
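A minimal sketch of that point in standard-library Rust (the values here are purely illustrative): the counts only move on clone and drop, and ordinary borrows never touch them.

```rust
use std::rc::Rc;
use std::sync::Arc;

fn main() {
    // Rc: non-atomic counts, single-threaded only (enforced by the type system).
    let a = Rc::new(String::from("hello"));
    let b = Rc::clone(&a);              // the only place the count is incremented
    assert_eq!(Rc::strong_count(&a), 2);
    drop(b);                            // decrement happens at scope exit / drop
    assert_eq!(Rc::strong_count(&a), 1);

    // Arc: atomic counts, needed only when sharing across threads.
    let shared = Arc::new(42);
    let worker = {
        let shared = Arc::clone(&shared);      // one atomic increment
        std::thread::spawn(move || *shared + 1)
    };
    assert_eq!(worker.join().unwrap(), 43);

    // Passing &a or &shared by reference touches no counts at all.
}
```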
It's how often reference counts are adjusted on hot paths that matters (including in libraries), and back to the original point, reference counting doesn't let you free groups of objects in one go (unlike a tracing GC).
Also, it'd be nice if the reference counts were stored separately from the objects. Storing them alongside the object being tracked is a classic mistake in reference-counting implementations (it spreads the writes over a large number of cache lines). I was actually surprised that Rust doesn't get this right.
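For illustration, a rough sketch of what "counts stored separately" could look like: a hypothetical handle-based pool (the Pool/Handle names and API are made up for this example, not any real library) where all the counts live in one dense array instead of next to each object.

```rust
// Sketch: reference counts kept in a side table, indexed by handle,
// so count updates hit a dense array rather than each object's cache line.
struct Pool<T> {
    objects: Vec<Option<T>>,
    counts: Vec<u32>, // parallel array of refcounts, stored separately
}

#[derive(Clone, Copy)]
struct Handle(usize);

impl<T> Pool<T> {
    fn new() -> Self {
        Pool { objects: Vec::new(), counts: Vec::new() }
    }

    fn insert(&mut self, value: T) -> Handle {
        self.objects.push(Some(value));
        self.counts.push(1);
        Handle(self.objects.len() - 1)
    }

    fn retain(&mut self, h: Handle) {
        self.counts[h.0] += 1;
    }

    fn release(&mut self, h: Handle) {
        self.counts[h.0] -= 1;
        if self.counts[h.0] == 0 {
            self.objects[h.0] = None; // drop the object; the slot could be reused
        }
    }

    fn get(&self, h: Handle) -> Option<&T> {
        self.objects[h.0].as_ref()
    }
}

fn main() {
    let mut pool = Pool::new();
    let h = pool.insert(String::from("payload"));
    pool.retain(h);
    pool.release(h);
    assert!(pool.get(h).is_some()); // still one reference left
    pool.release(h);
    assert!(pool.get(h).is_none()); // count hit zero, object dropped
}
```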
Another issue with manual memory management is that you can't compact the heap.
The amount of reference-counted pointers in most Rust code is a tiny fraction compared to boxes or compiler-tracked-lifetime references.
Yes, in theory it would be more efficient to store all the reference counts together, but that's just in theory. In practice most Rust apps will not call clone on a shared pointer on a hot path, and if they do it's usually one such pointer and they do something with the data as well (so it's all one cache line anyway).
You can't compare Rust/C++ with Swift/Nim when it comes to RC; there just aren't enough reference-count operations for it to matter much (unless you're in a shitty OO C++ codebase like mine that pretends it is Java with std::shared_ptr everywhere).
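To make the "not on a hot path" point concrete, here is a small illustrative Rust sketch: the single Arc::clone happens once, when handing the data to another thread, while the hot loop itself only ever sees plain borrows.

```rust
use std::sync::Arc;

fn hot_loop(data: &[u64]) -> u64 {
    // The hot path works through a plain borrow: zero reference-count traffic.
    data.iter().sum()
}

fn main() {
    let shared = Arc::new(vec![1u64, 2, 3, 4]);

    // One clone, outside the hot path, to give a second thread ownership.
    let worker = {
        let shared = Arc::clone(&shared);      // single atomic increment
        std::thread::spawn(move || hot_loop(&shared))
    };

    let local = hot_loop(&shared);             // no count adjustment here
    assert_eq!(local, worker.join().unwrap());
}
```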
Apps where heap compaction would be relevant in a low-level language like Rust or C++ will typically use a bump allocator, which will trounce any kind of GC.
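As a sketch of that approach, assuming the third-party bumpalo crate (its Bump::new/alloc API): each allocation is just a pointer bump, and everything is freed in one go when the arena is dropped, with no per-object frees, tracing, or count updates.

```rust
use bumpalo::Bump;

// A small linked node whose links borrow from the arena itself.
struct Node<'a> {
    value: u32,
    next: Option<&'a Node<'a>>,
}

fn main() {
    let bump = Bump::new();

    // Individual allocations are just a pointer bump inside the current chunk.
    let a = bump.alloc(Node { value: 1, next: None });
    let b = bump.alloc(Node { value: 2, next: Some(a) });

    println!("{} -> {}", b.value, b.next.unwrap().value);

    // Dropping `bump` releases the whole arena at once.
}
```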
Anyone claiming something like this obviously hasn't dug into GCs. Do you honestly think that writing to memory on every access, especially atomically, is anywhere near the performance of a GC that can do most of its work in parallel and basically just flip a bit to mark everything no longer accessible as deleted?
Tracing is batched up in GC pauses, rather than on every access as with naive RC. It is necessary to stop the world, but the work done in the pause does not need to use atomic operations.
Atomics are handy in a parallel/multi-core tracing collector, but IME pointer chasing in tracing somehow manages to cover the time it takes to do atomic operations.
The goal was to write a network driver in several languages. Nobody said anything about comparing memory management techniques, nor would the Swift implementation use a stack allocator anyways.
I don't really have the skills to do an accurate comparison across many languages, and if I pick just three or four it's going to get nitpicked to death for being cherry-picked. Honestly, I generally think studies of this kind are mostly doomed to failure, or invariably converge to someone trying to encode x86 intrinsics in Rust.
Swift’s value types have reference counts because they may have members that need their lifetimes to be managed appropriately. (For example, if they’re reference types.)
This is incorrect: value types are not reference counted in Swift. If a value type contains a reference-type member (usually an anti-pattern!), then that member's reference count is indeed incremented when the value type is copied. But it is not accurate to claim that value types themselves have reference counts.
Yes, fair enough. I was thinking of this more from the perspective of types whose internal reference types are opaque, so it's effectively like the value type itself having a reference count on it, but yes, really it's the internal storage that is getting the count.
This isn't the kind of program Swift was designed to perform well for.
Nor is wall-clock speed even what the system should be optimizing for, since you buy phones to run apps, not to run the system. You should be measuring how well it gets out of the way of the important work.
"From its earliest conception, Swift was built to be fast. Using the incredibly high-performance LLVM compiler technology, Swift code is transformed into optimized machine code that gets the most out of modern hardware. The syntax and standard library have also been tuned to make the most obvious way to write your code also perform the best whether it runs in the watch on your wrist or across a cluster of servers.
Swift is a successor to both the C and Objective-C"
It usually isn't. Reference counting is good when you need to rather tightly limit memory use, are wedded to RAII, or need something simple enough that a programmer can knock it out in an afternoon.