Did you verify this through disassembly? These loops can still be replaced by a closed-form expression, and I wouldn’t be surprised if LLVM figured this out.
Sooooo.... anyone know a compiler that actually does the closed-form tactic on the loop(s)?
If I'm seeing this correctly, in theory the program could be compiled down to something that finishes nearly instantly, with no loops at all?
If it turned the inner loop into a closed-form expression, I would expect the outer loop to go through 10k iterations a lot faster than needing half a second.
Running the C code through Godbolt with the same Clang and -O3 optimizations, it looks like you're right: there are still 2 loops being performed. The loops are unrolled to perform 4 iterations at a time, but they're otherwise the same; beyond that it didn't do much with them. Hilariously, there is a god-awful set of shifts and magic numbers... to save a single modulo operation from running once on the line declaring 'r'.
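For anyone wondering what these two rewrites look like: the hoped-for "closed form" is the classic induction-variable fold, e.g. collapsing a summation loop into

    \sum_{i=0}^{n-1} i = \frac{n(n-1)}{2}

and the shifts-and-magic-numbers sequence is (presumably, I haven't checked this particular output) the standard strength reduction of division by a constant, which lets the compiler get a modulo without a divide instruction:

    \left\lfloor x/10 \right\rfloor = \left\lfloor \frac{m \cdot x}{2^{s}} \right\rfloor \;\text{ for a suitable magic pair } (m, s), \qquad x \bmod 10 = x - 10\left\lfloor x/10 \right\rfloor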
For me it means I can fork the repo and start hacking on the code immediately, and it will have reasonable quality.
With C++/Python and even Node, I often find myself wasting half a day just getting a project to build.
Yep. If it's a Python project, there's about a 60% chance it won't run on the first try after a fresh clone.
When I see a CLI tool written in Rust or Go, it usually just works out of the box without having to mess around with godawful pip environments or conda.
I say this as someone who cut their teeth on Python and loves it for a thousand reasons: I have to agree. Python projects are abysmal. Rye and UV are promising (and I am very excited about them), but they aren't quite ready yet.
"Written in Rust" carries with it significant promises that only Go also has. (Go has a lot of the same promises, for having good tooling and the same mostly-statically-compiled philosophy.)
"Written in Rust" tells me a project is easy to install and easy to hack on. I am far less interested in using non-Rust projects, and I am definitely disinterested in making code contributions to non-Rust projects.
Case in point: It took me much longer to write this comment than it took to install and use marmite.
This. It's basically about average quality. It's the old Python Paradox all over again. JavaScript / TypeScript is the bottom of the barrel in terms of quality. Python / C++ is higher than that. And Rust is at the top.
For two uniform random points on an n-sphere, one point (viewed from the other as the north pole) clusters around the equator, so the distance between them concentrates tightly. The article shows a histogram of the distance distribution in fig. 11. While it looks Gaussian, it is more closely related to the Beta distribution. I derived it in my notes, as (surprisingly) I could not find it easily in the literature:
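For reference, a sketch of the standard result (assuming both points are uniform on the unit sphere S^{n-1} in R^n; this is not the linked derivation): the inner product t = <x, y> has the same law as a single coordinate of a uniform point,

    f(t) \propto (1 - t^2)^{(n-3)/2}, \qquad \frac{1+t}{2} \sim \mathrm{Beta}\!\left(\tfrac{n-1}{2}, \tfrac{n-1}{2}\right)

and since the distance is D = \lVert x - y \rVert = \sqrt{2 - 2t},

    \frac{D^2}{4} \sim \mathrm{Beta}\!\left(\tfrac{n-1}{2}, \tfrac{n-1}{2}\right)

so D concentrates around \sqrt{2}: Beta-shaped, but increasingly Gaussian-looking as n grows.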
Correct. I was too brief in my comment. It's explained in the article: without loss of generality you can call one of the two points the 'north pole', and then the other one will be distributed close to the equator.
Pick an equator on an n-sphere. It is an (n-1)-dimensional hyperplane through the center, spanned by all but one of the sphere's dimensions; the xy plane for a unit sphere in xyz, for example.
Uniformly distribute points on the sphere. For high n, all points will be very near the equator you chose.
Obviously, for a point to not be close to this chosen equator, it must project close to 0 on all dimensions spanning the equatorial hyperplane, and not close to 0 on the dimension making up the pole-to-pole axis.
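A quick way to see this numerically (a Monte Carlo sketch, assuming numpy; the last coordinate plays the role of the pole-to-pole axis):

    import numpy as np

    rng = np.random.default_rng(0)
    for n in (3, 30, 300, 3000):
        x = rng.standard_normal((100_000, n))            # Gaussian vectors...
        x /= np.linalg.norm(x, axis=1, keepdims=True)    # ...normalized: uniform on S^(n-1)
        pole = np.abs(x[:, -1])                          # distance from the equator x_n = 0
        print(f"n={n:4d}  mean |x_n| = {pole.mean():.4f}   1/sqrt(n) = {n**-0.5:.4f}")

The mean distance from the chosen equator shrinks on the order of 1/sqrt(n), which is exactly the concentration being described.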
My first thought is that it's rather obvious, but I'm probably wrong. Can you help me understand?
The analogy I have in mind is: if you throw n dice, for large n, the likelihood of one specific chosen die being high value and the rest being low value is obviously rather small.
I guess the consequence is still interesting: most random points on a high-dimensional n-sphere will be close to the equator. But they will be close to any arbitrarily chosen equator, so it's not that meaningful.
If the equator is defined as containing n-1 dimensions, then as n grows you'd expect it to "take up" more of the space of the sphere, hence most random points will be close to it. It is a surprising property of high-dimensional space, but I think that's mainly because we don't usually think about the general definition of an equator and how it scales to higher dimensions; once you understand that, it's not very surprising.
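One way to put a number on that intuition (for a uniform point x on the unit sphere in R^n): since \sum_i x_i^2 = 1 and the coordinates are exchangeable,

    \mathbb{E}[x_i^2] = \frac{1}{n} \quad\Rightarrow\quad \text{typical distance from any fixed equator} \sim \frac{1}{\sqrt{n}}

so every one of the n coordinate "equators" is simultaneously close to a typical point.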
> The analogy I have in mind is: if you throw n dice, for large n, the likelihood of one specific chosen die being high value and the rest being low value is obviously rather small.
You're exactly right, this whole thing is indeed a bit of an obvious nothingburger.
Exactly, and this is in my experience what most Rust code ends up looking like.
It compromises a bit on generality and (potential) performance to achieve better readability and succinctness. Often a worthwhile trade-off, but not something the standard library can always do.
If you make your batches small, you can get pretty much all of the benefit without adding (appreciable) latency. e.g. batch incoming web requests in 2-5 ms windows. Depending on what work is involved in a request, you might 10x your throughput and actually reduce latency if you were close to the limit of what your database could handle without batching.
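A minimal sketch of that windowed batching in Python asyncio (names like `flush` are illustrative; in practice it would be your single-round-trip database call):

    import asyncio

    BATCH_WINDOW = 0.003   # 3 ms collection window, inside the 2-5 ms range above
    MAX_BATCH = 128        # flush early if the batch fills up first

    async def batcher(queue: asyncio.Queue, flush):
        # Drain `queue` in small time-windowed batches, handing each batch to `flush`.
        loop = asyncio.get_running_loop()
        while True:
            batch = [await queue.get()]          # block until the first item arrives
            deadline = loop.time() + BATCH_WINDOW
            while len(batch) < MAX_BATCH:
                remaining = deadline - loop.time()
                if remaining <= 0:
                    break
                try:
                    batch.append(await asyncio.wait_for(queue.get(), remaining))
                except asyncio.TimeoutError:
                    break
            await flush(batch)                   # e.g. one multi-row INSERT or SELECT ... IN (...)

Each queued item can carry an asyncio.Future so the original request handler awaits its own result once the batch completes.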
Why should we weight by the number of different digits? I.e., is there an argument for why the cost of a single digit is linearly proportional to the number of values it can hold?
To me this seems like the weak point in the argument.
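For context, the cost function being questioned appears to be the standard "radix economy": representing N in base b takes about \log_b N digits, and each digit is assumed to cost b (one unit per distinguishable value), giving

    E(b, N) \approx b \log_b N = \frac{b}{\ln b}\,\ln N

which is minimized at b = e. That linear per-digit cost is exactly the assumption being poked at.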
It’s also the only high-production game with native macOS & M1 support I could find.
I got the early access just to see what the M1 GPU can do, and was impressed.
This is essentially the Birthday paradox. You need a quadratically better false match rate to deduplicate than to authenticate.
It’s also why Worldcoin went with irises (the highest-entropy biometric), custom hardware, custom optics and an in-house trained algorithm.
The Iriscode (if it's the same as Daugman's iriscode) has so far been measured at 249 bits of entropy, or over 10^74 combinations [1].
So even with the birthday paradox you'd need 10^37 people before having a good chance of a collision, which is rather more than we are likely to have in the next few centuries.
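To put numbers on the birthday bound (treating the codes as uniform over N ≈ 2^249 values, the idealized case): with k people enrolled, the expected number of pairwise collisions is about

    \binom{k}{2}\frac{1}{N} \approx \frac{k^2}{2N}

which first reaches 1 around k = \sqrt{2N} = 2^{125} \approx 4 \times 10^{37}, matching the estimate above.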
Of course, it's possible that there are some subpopulations who don't have this amount of entropy in their irises, most obviously the small number of people who have a birth defect such that they are born without eyes.
This is a really good comment. But of course the problems in authentication and de-duplication are also different in that you care about adversarial false positives much more once authentication is the goal. As I understand things, Worldcoin claims the iris scans won't be used to control funds (or other services). I am skeptical that many of those users will retain their non-biometric wallet credentials long-term, which will leave you with a database of biometric credentials that will have to be used for authentication if you want to use those credentials for anything important in the future.