It does give some insight into what you seek, at least. For example, “We find that for smaller n ≲ 262144, JesseSort is slower than Python’s default sort.”
I’d like to see a much larger n but the charts in the research paper aren’t really selling JesseSort. I think as more and more “sorts” come out, they all get more niche. JesseSort might be good for a particular dataset size and ordering/randomness but from what I see, we shouldn’t be replacing the default Python sorting algorithm.
> That misses the point that there may be breakthroughs that are much harder or near impossible to make if you're familiar with the state-of-the-art.
If that's the point, you should maybe try and find even a single example that supports it. As the article points out, Krapivin may not have been familiar with Yao's conjecture in particular, but he was familiar with contemporary research in his field and actively following it to develop his own ideas (to say nothing of his collaborators). Balatro's developer may not have been aware of a particular niche genre of indie game[1], but they were clearly familiar with both modern trends/tastes in visual and sound design, and in the cutting edge of how contemporary video games are designed to be extremely addictive and stimulating. To me, these examples both seem more like the fairly typical sorts of blind spots that experts and skilled practitioners tend to have in areas outside of their immediate focus or specialization.
Clearly, both examples rely to some extent on a fresh perspective allowing for a novel approach to the given problem, but such stories are pretty common in the history of both math research and game development, and neither (IMO) really warrants a claim as patently ridiculous as "the best way to approach a problem is by either not being aware of or disregarding most of the similar efforts that came before."
[1] And as good of a video game as Balatro is, there are plenty of "roguelite deckbuilder" games with roughly the same mechanical basis; what makes it so compelling is the quality of its presentation.
If your users are not fluent enough in Linux/Unix tools and conventions to be able to figure out how large a single standardized dotfile in their home directory is, I really don’t think the problem is “application developers are following a well-known standard.”
That’s probably why the suggestion was to look into psychedelic therapy, which utilizes a trained and experienced therapist in a controlled environment, rather than to hand their friend a strip of blotter.
Do you have a citation for the rationality of e^pi - pi? I couldn't find anything alluding to anything close to that after some cursory googling, and, indeed, the OEIS sequence of the value's decimal expansion[1] doesn't have notes or references to such a fact (which you'd perhaps expect for a rational number, as it would eventually be repeating).
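For what it's worth, a quick numeric check (a sketch using only Python's standard library; it obviously can't settle rationality either way) shows why the claim circulates at all: the value sits suspiciously close to the integer 20, which is the well-known near-integer coincidence.

```python
import math

# Evaluate e^pi - pi in double precision. This proves nothing about
# rationality, but it exhibits the famous near-integer coincidence:
# the value falls just short of 20.
value = math.exp(math.pi) - math.pi
print(f"{value:.12f}")  # ~19.999099979189
```

The gap from 20 is about 9×10⁻⁴, so it's a mildly amusing coincidence rather than evidence of anything deeper.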
Is the joke here that if you lie to people (on the Internet or otherwise), they’ll take it at face value for a little bit and then decide you’re either a moron or an asshole once they realize their mistake?
I've read the post you're responding to like 3 times, and after pondering it deeply, I'm pretty sure the conclusion of their line of thinking pretty definitively stops at "Apple should not be sending data off the device without the user requesting it." If you think otherwise, you should maybe provide more of an argument.
The line of thinking is right there: "not sending any info to anyone else anywhere at any time"
There are way more egregious privacy concerns than sending non-reversibly encrypted noisy photos to Apple. Why draw the line here and not at the far worse things happening on your phone and computer right now?
The initiative is for the user to command their computer to communicate or not with the information of their choosing.
"Computer, I command thee to send this and only this information over the channel of my choosing, using following encryption scheme, for here be my seal of approval for anyone who might want to verify, and here be the key"
I understand the enthusiasm, but from a business perspective it does not matter. Many businesses would fail if they went too deep on this. Their only audience would be people who are experts in the area; everyone else would be confused and disappointed when things don't work the way they expect.
At Apple's scale, most people care about what they can do, not about how it happens. For that reason, the default matters when the option only concerns the internal processing pipeline and privacy.
As a result, it is enough to be able to show, should some expert investigate the matter, that privacy has been considered at a reasonable level.
Maybe some day these things will be common knowledge, but I fear the knowledge gap is only increasing.
Almost every single app today interacts with the network in some way.
You would be constantly annoying the user with prompt after prompt if you wanted to get consent for sending any relatively harmless data off the device.
FWIW, a "Network Access" app permission is one of the features that GrapheneOS provides. It is a setting offered to the user on every single app install. It should be in base AOSP, and I have to wonder why it isn't already.
I'm curious what you do when you encounter issues in a 3rd party dependency, or (since it sounds like you do dev-ops work) in an infrastructure process or tool whose code you didn't write? I use auto-complete and LSP features pretty heavily myself in my day-to-day development work, but when debugging an issue, I sometimes run into issues with a 3rd party library, or a kubernetes component, or whatever, and it's necessary to jump into those codebases and understand what's going on, whether the bug is with the library or with our own integration, whether it's more expedient to patch the dependency or write a workaround in our own code, &c. And in those cases, I generally do not have a LSP running, or my editor properly configured beyond basic syntax highlighting, and I have to rely on, presumably, the same techniques that devs who don't use LSPs at all employ in their day-to-day work: being able to quickly read and understand other peoples' code, being able to approximate jump-to-def with find/grep/ag/rg, being able to effectively trace datatypes and control flow through multiple definitions/files, being able to take effective notes to improve my working memory, and so on.
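To be concrete, the grep-based approximation of jump-to-def I mean is nothing fancy (a sketch assuming GNU grep and a hypothetical Python function name, `parse_config`, purely for illustration):

```shell
# Approximate jump-to-definition with plain grep when no LSP is configured:
# -r recurses into the source tree, -n prints line numbers so an editor
# can jump straight to the hit. The pattern anchors on Python's `def`
# keyword; adjust the pattern for whatever language you're reading.
grep -rn 'def parse_config' .
```

ripgrep (`rg`) and ag accept essentially the same invocation and are much faster on large trees, which matters when you're spelunking through a vendored dependency.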
Well yes, I still do lots of grepping and just search across all files when needed. The two aren't mutually exclusive, but when good IDE features are available, they help me move quicker.
I also keep written notes, where search is a must-have since my notes aren't structured anyway.
Nearly instantly searching through vast amounts of data is one thing computers are amazing at, and it is absurd not to leverage this capability wherever possible.
A close family member of mine is schizophrenic, and, as I understand it, they've always (or, at least, for the past 5-8 years) relied on a combination of drugs and these sorts of therapies to manage their symptoms. That is to say, the drugs help keep their thinking organized and reduce the frequency and severity of hallucinations/delusions, but those issues never really go away completely, and sometimes a given drug regimen stops being effective for whatever reason. So it's very important to have (and to practice!) strategies for identifying when your thoughts are "right" vs "wrong," and to be able to deal with the problem effectively when it's the latter.