It's not so much an insistence as just where the industry is at. Those freewheel hubs are loud because that's generally the lightest, most efficient design: lots of teeth for immediate drive engagement, no extra grease that would lessen said engagement, no extra material for sound damping.
Clutch mechanisms are available, but these are slightly less efficient and generally more expensive.
Basically... People love spending lots of money on the absolute highest quality components. How will other riders know how fancy your hubs are unless they broadcast that fact to the world?
Photopea is fantastic, fast, looks and works exactly like Photoshop, and retains some features that have been removed from Photoshop, such as the ability to generate normal maps.
The data availability statement in this article undermines its entire premise.
> We did not publish our raw data along with the manuscript because it could be understood that we are publicly shaming authors who did not want to share their data. As for the raw data that were received during this study, we informed our study participants that those raw data will be deleted after being examined and that all data and communication will be treated with strict confidence.
There's a big difference between explicitly declining to share data for justified ethical reasons, and saying you will share the data and then failing to follow through.
All of these have some exceptions, but are for the most part true:
high energy physics
lots of computational bio
fluid dynamics
rendering
In general, Python and MATLAB really struggle on problems where you want maximum performance but the most efficient algorithms aren't vectorized. In some cases this is solvable by writing Python libraries in C/C++, but especially in scientific fields the end users are often the same people writing the algorithms, so they don't gain much from a Python library if they have to do all the hard work in C++ anyway. Julia gives them a far more productive dev experience while still having a good user experience.
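A minimal sketch of the kind of code being described (the function and data here are hypothetical, not from the comment): a loop-carried recurrence like an exponential moving average, where each output depends on the previous one, so there is no single vectorized NumPy operation that expresses it, and in pure Python the loop itself becomes the bottleneck.

```python
# Hypothetical example of a non-vectorizable loop: an exponential
# moving average. Each step reads the previous step's result, so the
# whole loop can't be replaced by one array-wide NumPy operation.
def ema(xs, alpha=0.5):
    """Return the exponential moving average of xs."""
    out = []
    prev = xs[0]
    for x in xs:
        # Loop-carried dependency: 'prev' comes from the last iteration.
        prev = alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out

print(ema([1.0, 2.0, 3.0, 4.0]))  # [1.0, 1.5, 2.25, 3.125]
```

In Python this loop runs at interpreter speed; the usual escape hatch is rewriting it in C/C++ (or Cython/Numba), which is exactly the extra step the comment says Julia avoids.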
Indeed, the grants (in the millions of dollars) given to rewrite things in C++ or Python just to handle arrays (because for loops suck), ending up as monorepo blobs, are jaw-dropping -- while it's almost free in Julia: with 1/100th the lines of code and more flexibility, we can match and even beat C++ code...
Interesting article on the history of Perl for bioinformatics.
As someone who is currently a bioinformatics PhD student, Perl has become the vinyl of programming languages. While some older folks script with Perl, most packages I’ve seen published recently use R, Python, C++, or, more recently, Rust.
I’ve been working with biology data. A lot of our older scripts are Perl. It’s really good at this stuff. The great thing is the language is super stable so we don’t have to rewrite them usually.
The Perl scripts I wrote 20 years ago are still running on modern systems today. It's almost scary how stable the language and interpreter have been.
Turns out it's not an accident, though! There were a lot of good decisions that kept the language stable, and test frameworks and code coverage have been a critical part of Perl and its modules since the late 1980s. In fact, pretty much all of CPAN gets tested on a matrix of Perl versions and operating systems (e.g. http://www.cpantesters.org/distro/N/Net-Amazon-EC2.html)
I remember silly Java consultants rabbiting on about TDD and agile while dismissing OSS.
Meanwhile, the curation of both testing and documentation, as well as overall code quality, on CPAN was light years beyond the best corporate code I've ever seen.
I'd argue that Perl (with CPAN) was the first internet-native programming environment.
Yes, I was also there in those years. Perl snippets were shared by everyone, but almost nobody knew how they worked, much less how to tweak them for new scenarios or how to contribute to a library. I remember a snippet that read a FASTA file and counted the GC%, but it had an error somewhere that made it fail when it reached the 1,001st sequence. Nobody was able to find the error; they just split the FASTAs into smaller files. Until someone wrote a less esoteric Python script, and the Perl snippet died.
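For readers outside bioinformatics, the task in that anecdote is simple to state: FASTA is a plain-text format where each record starts with a `>` header line followed by sequence lines, and GC% is the fraction of G and C bases. A minimal sketch of such a script (this is an illustration, not the original snippet):

```python
# Illustrative sketch: parse FASTA text and compute GC% per sequence.
# FASTA records start with a ">" header line; the lines that follow
# (until the next ">") are the sequence.
def gc_percent(fasta_text):
    results = {}

    def finish(name, parts):
        seq = "".join(parts).upper()
        results[name] = 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

    name, parts = None, []
    for line in fasta_text.splitlines():
        if line.startswith(">"):
            if name is not None:
                finish(name, parts)
            name, parts = line[1:].strip(), []
        elif line.strip():
            parts.append(line.strip())
    if name is not None:
        finish(name, parts)
    return results

print(gc_percent(">seq1\nATGC\n>seq2\nGGCC\n"))  # {'seq1': 50.0, 'seq2': 100.0}
```

Because it accumulates per record rather than slurping everything, a script like this has no reason to care whether a file holds 10 sequences or 10,000, which is what made the mystery failure at sequence 1,001 so maddening.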
Might be unrelated, but Perl does have a hard limit on recursion within the same function; after 100 recursions it'll die with an error and you have to unroll your loops.
$ perl -Mwarnings -E'sub f { $c++; say $c if 0 == $c % 1_000_000; f() } f'
Deep recursion on subroutine "main::f" at -e line 1.
1000000
2000000
3000000
4000000
5000000
6000000
7000000
8000000
9000000
10000000
11000000
12000000
13000000
14000000
15000000
16000000
17000000
18000000
19000000
20000000
Terminated
You get a warning after 100 calls, which almost always indicates a bug. If it's a genuine deep recursion, the warning can easily be suppressed with `no warnings "recursion"`.
On my computer, the program continues to run for about 10 seconds, consuming 9 GB virtual/resident memory, after which it is killed off by `earlyoom`.
Source: https://bicycles.stackexchange.com/questions/4606/what-cause....