1. Pick a random sample of someone's claims and statements
2. Research them thoroughly, so as to figure out if they are bullshit
3. If a sufficiently large percentage of their statements are bullshit, do not bother listening to anything else they say. Not even to "give them a fair hearing", because it will occupy all your time and energy.
4. Re-calibrate every 5-10 years, in case the person/organization has changed over that period
The above algorithm is particularly useful in dealing with the steady stream of bullshit that arises out of particular media outlets and political figures.
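The steps above can be sketched as a small function. To be clear, this is a toy illustration: the function name, the sample size, and the 50% threshold are all my own arbitrary choices, and `verify` stands in for the (expensive, human) research step.

```python
import random

def trust_source(claims, verify, sample_size=10, bs_threshold=0.5):
    """Step 1: pick a random sample of a source's claims.
    Step 2: research each one (the caller-supplied `verify` returns
    True if a claim checks out).
    Step 3: if the bullshit fraction exceeds the threshold, stop
    listening to the source entirely.
    Step 4 (recalibration every 5-10 years) is just calling this again."""
    sample = random.sample(claims, min(sample_size, len(claims)))
    bs_fraction = sum(not verify(c) for c in sample) / len(sample)
    return bs_fraction < bs_threshold  # True: source is worth hearing out
```

The point of the threshold (rather than requiring perfection) is that everyone is wrong sometimes; the algorithm filters for sources whose *rate* of bullshit makes them not worth the verification cost.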
A ton of the comments, disguised as good-natured, that give more or less unqualified praise to these languages are actually mostly bullshit, especially two very specific strands of discussion: comparisons of Nim or Julia capabilities with Cython, and to a lesser degree Julia’s multiple-dispatch templating patterns.
It is so hard to wade through those comments and follow every link and cultish argument or counter-counter-argument, while meanwhile you’ve got people claiming that perfectly reasonable criticisms of those languages are themselves the “real” bullshit.
It turns into such a convoluted mess that everybody is certain that everybody else’s claims are the bullshit and that their own claims (which other people think are the bullshit) are genuinely refuting bullshit.
By the end of it, a strategy like yours becomes very hard, because the research task becomes intractable even for a small subset of claims.
Like literally, once I had to go and read a ~30-page published research article to learn that a certain Julia differential-equations library happens to use a particular multiple-dispatch templating pattern to offer a way to treat uncertainty modeling as a built-in, domain-specific construct. But this was not some kind of advantage of Julia (you could do literally the same thing at the same level of abstraction in Cython), nor was it even a required aspect of the differential-equation domain problem. It was just a sort of extra feature that worked out in that library, and it could arguably be implemented more simply without any of the multiple-dispatch behavior at all.
Blerg. I had to wade into all that, ten levels deep in some nested conversation with hard-to-believe one-sided support of Julia, all just to learn that one single point could be refuted!
What's exactly the problem here? You had to study a subject before discussing it? Isn't that the goal?
Basically, the convoluted bullshit acts like an obfuscator: it forces an unreasonable amount of legwork before you can safely trust even a generic appraisal of whether something is or isn’t bullshit.
This is a problem because bandwidth is limited. You can’t go reading 30-page articles every time you want to decide which of two general strands of a conversation has any merit for further engagement, and the noise pumped out by bullshitters (especially when they claim they’re not the bullshitter and that it’s really you or someone else) makes this harder and harder all the time.
Technology has lowered the barrier to producing bullshit far more than the barrier to refuting it.
What is really hard to recognize is bullshit that just "floats around" without a single source.
1. Mostly speak truths.
2. Scatter your bullshit sparsely so it remains relatively difficult to detect.
3. Wrap your bullshit with caveats or innuendo so you can argue others misinterpreted you. When someone interprets your bullshit as bullshit, counter that they are bullshitting by intentionally misunderstanding your bullshit.
4. When you get caught in complete bullshit, double down on it with such conviction and fury that you convince people by force that it isn't bullshit.
I could try listing real life implementations of the above BGA, but it would likely surpass HN's character limit.
The specific mention is about uncertainty propagation in our differential equation solver library. It happened automatically because of the type system. There's evidence in the thread where we found out, since a user notified us that it works:
We have since been investigating the pharmacometric implications of this methodology and are seeing good promise in a paper that is coming out.
But then of course someone will come along and say "Cython can do it too". Yes, every Turing-complete language can do everything. The point is how a zero-lines-of-code feature led to a new avenue of research, and Julia's dependent compilation process allowed this to be a very optimized implementation. Of course this little-to-no-effort solution is pretty cool and something to share! However, as stated in the article this HN thread is about, bullshit propagates fast, so this other person had already come to this thread to start ragging on this work long before I could get here (since we were at a pharmacometrics conference getting feedback from the FDA about the uncertainty propagation algorithm! Just to show it's real, check the dates of ACoP 2018).
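The mechanism being described — a solver written generically over a numeric type, so that an error-carrying number type propagates uncertainty through it with zero changes to the solver — can be illustrated in a few lines. This is a hedged toy sketch in Python using operator overloading, not the actual Julia library's implementation; `Uncertain` and `euler` are names I made up, and the error propagation is the standard linear independent-errors approximation.

```python
from dataclasses import dataclass
import math

@dataclass
class Uncertain:
    """A value with a standard deviation. Arithmetic propagates the
    uncertainty linearly, assuming independent errors (a simplification:
    correlations between intermediate values are ignored)."""
    val: float
    err: float

    def __add__(self, o):
        o = o if isinstance(o, Uncertain) else Uncertain(o, 0.0)
        return Uncertain(self.val + o.val, math.hypot(self.err, o.err))
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Uncertain) else Uncertain(o, 0.0)
        return Uncertain(self.val * o.val,
                         math.hypot(self.val * o.err, o.val * self.err))
    __rmul__ = __mul__

def euler(f, y0, t0, t1, n):
    """Generic explicit Euler solver. It was 'written for' plain floats,
    but it only uses + and *, so any type overloading those flows
    through unchanged -- the uncertainty support costs the solver
    zero lines of code."""
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y = y + h * f(t, y)
        t += h
    return y

# Solve dy/dt = -0.5*y with an uncertain initial condition.
result = euler(lambda t, y: -0.5 * y, Uncertain(1.0, 0.1), 0.0, 1.0, 100)
```

In Julia the same effect falls out of multiple dispatch and specializing compilation (the solver is recompiled and optimized for the uncertainty-carrying type); in Python the dispatch is dynamic, which makes the mechanism visible but slow. That difference is the performance point being made above, separate from the "any language can express this" point.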
So let me add a corollary to this principle: any sufficiently widespread work will have an anonymous disgruntled bullshitter that hates on it. I guess you're supposed to learn it's a compliment that someone runs around spending their time doing this to you. But hey... I wasn't going to respond anymore but the thread's focus was just way too perfect. See you next time mlthoughts2018: maybe the name will change soon to make it harder to bop the bullshit.
Simple: if a source has been found to lie, apply a weight of zero.
Edit: I don't think I am qualified to say that, as I haven't lived my second 10 years yet
PS: Well you got to my no-BS list
Bullshit: (1) Data (often a firehose) produced by someone who is indifferent to the truth or falsity of what is being said (my short version of Harry Frankfurt’s definition in On Bullshit; see also the Wikipedia summary). (2) Data that appears to contain more information than it actually does (my version, which I think is roughly equivalent). (3) Noise randomly tagged with truth-values to give it apparent legibility. (4) Non-requisite variety.
Gollumization: The impoverishment of a system through ephemeralization and weaponized attention.
Ephemeralization: The natural conversion of signal into bullshit through creative destruction (derived from the original definition due to Buckminster Fuller).
Arbitrage: Exploiting bullshit for profit.
Weaponization: Separation of data into information and bullshit, and deliberate direction of the two signals at different systems (generally a hunting party and a cargo cult respectively), with the intention of creating serendipity in one system and zemblanity in another.
They are not transmitting a coherent alternative narrative. They are just delivering a constant stream of bullshit faster than it can be debunked. Just increasing the cognitive load and creating cognitive dissonance is a valid propaganda method.
I've been fighting to debunk it, but so far I'm losing. It's more a case of momentum and if-it-ain't-broke-don't-fix than logic. Nobody seems to want productivity enough yet to care, so if they pay me to mow the lawn with tweezers, I'll tweeze away...
Having class libraries in NuGet did work well for us (we are open source).
Next trend to die soon: TDD / "Unit tests" that mock half the app
The way to go is to offer a compelling alternative.
Of course, this applies only if the bullshit has been confirmed as such and if the person promoting it has shown to be uninterested in a sincere and fair argumentative discussion.
The reason people think it's the other way around is a testament to the power of appeal to authority.
See: Flat earthers, anti-vaxxers, and birthers.
So color me convinced.
On the other hand, it took me about 10 seconds to write the previous paragraph but about 100 seconds to decide how to word my initial bullshit.
So my initial bullshit stands. :)
Maybe an S operator for scattering bullshit.
Time independent Hamiltonian for Steady-State bullshit.