Bullshit asymmetry principle (wikipedia.org)
85 points by kyleblarson 65 days ago | 38 comments



The most efficient way to deal with this asymmetry:

1. Pick a random sample of someone's claims and statements

2. Research them thoroughly, so as to figure out if they are bullshit

3. If a sufficiently large percentage of their statements are bullshit, do not bother listening to anything else they say. Not even to "give them a fair hearing", because it will occupy all your time and energy.

4. Re-calibrate every 5-10 years, in case the person/organization has changed over that period

The above algorithm is particularly useful in dealing with the steady stream of bullshit that arises out of particular media outlets and political figures.
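The four steps above can be put into code; here is a minimal Python sketch, where all names are hypothetical and `is_bullshit` stands in for the expensive manual research of step 2:

```python
import random

def worth_listening(claims, is_bullshit, sample_size=5, threshold=0.5, seed=None):
    """Spot-check a random sample of someone's claims (steps 1-3).

    claims      -- list of statements attributed to the source
    is_bullshit -- callable doing the (expensive) research on one claim
    threshold   -- fraction of bullshit above which the source is ignored
    """
    rng = random.Random(seed)
    sample = rng.sample(claims, min(sample_size, len(claims)))
    bullshit_fraction = sum(map(is_bullshit, sample)) / len(sample)
    return bullshit_fraction <= threshold
```

Step 4 then amounts to discarding the cached verdict and re-running the spot-check every 5-10 years.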


This can be difficult, though. For example, there is a recurring phenomenon on HN where certain posts exist mainly to cheerlead specific programming languages, namely Nim and Julia.

A ton of the comments that give more or less unqualified praise to these languages, disguised as good-natured, are actually mostly bullshit, especially two very specific strands of discussion: comparisons of Nim or Julia capabilities against Cython, and, to a lesser degree, Julia's multiple-dispatch templating patterns.

It is so hard to wade through those comments and follow every link and cultish argument or counter-counter-argument; meanwhile, you've got people claiming that perfectly reasonable criticisms of those languages are themselves the "real" bullshit.

It turns into such a convoluted mess that everybody is certain that everybody else’s claims are the bullshit and that their own claims (which other people think are the bullshit) are genuinely refuting bullshit.

By the end of it, a strategy like yours becomes very hard, because the time complexity of the research task becomes intractable even for a small subset of claims.

Literally, I once had to go and read a ~30-page published research article to learn that a certain Julia differential equations library happens to use a particular multiple-dispatch templating pattern to offer a way to treat uncertainty modeling as a built-in, domain-specific construct. But this was not some special advantage of Julia (you could do literally the same thing at the same level of abstraction in Cython), nor was it even a required aspect of the differential equation domain problem. It was just a sort of extra feature that worked out in that library, but which could be implemented (arguably more simply) without any of the multiple-dispatch behavior at all.

Blerg. I had to wade into all that, ten levels deep in some nested conversation with hard-to-believe one-sided support of Julia, all just to learn that one single point could be refuted!


> Blerg. I had to wade into all that, ten levels deep in some nested conversation with hard-to-believe one-sided support of Julia, all just to learn that one single point could be refuted!

What exactly is the problem here? You had to study a subject before discussing it? Isn't that the goal?


No, it was that on the surface the claims being made seemed flimsy, but the overall discussion was so full of debate and forceful counter-claims about which party is or isn't bullshitting that as an outside reader I couldn't tell anything about it. I didn't have to read deeply into one niche topic to form an informed opinion about it. Rather, I had to read deeply into one niche topic just to close the loop on whether my basic sense that someone was bullshitting about a generic, high-level thing was warranted, even as a simple heuristic.

Basically, the convoluted bullshit acts like an obfuscator that forces you to do an unreasonable amount of legwork before you can safely trust even a generic appraisal of whether something is or isn't bullshit.

This is a problem because bandwidth is limited. You can't go reading 30-page articles every time you want to decide which of two general strands of a conversation has any merit for further engagement, and the noise pumped out by bullshitters (especially when they claim they're not the bullshitter and that it's really you or someone else) makes this harder and harder all the time.


That is a good way of solving the problem on a personal scale. But let's now add a social media platform, where it has become easier than ever to get a soapbox to thousands of people. This is where we reach a critical mass of bullshit dissemination: even if 99% of people disregard said bullshit, you still have a large number of people who will listen and continue to disseminate it on their own, creating a very effective bullshit echo chamber that carries on in a feedback loop.

Technology has lowered the barrier to spreading bullshit far more than the barrier to refuting it.


When you can trace the bullshit to a single source, it's easier to notice and ignore.

What is really hard to recognize is bullshit that just "floats around" without a single source.


Sorry, but that's not a very robust bullshit spotting algorithm (BSA). Here's a bullshit generation algorithm (BGA) that will beat your spotting algorithm every time.

1. Mostly speak truths.

2. Scatter your bullshit sparsely so it remains relatively difficult to detect.

3. Wrap your bullshit with caveats or innuendo so you can argue others misinterpreted you. When someone interprets your bullshit as bullshit, counter that they are bullshitting by intentionally misunderstanding your bullshit.

4. When you get caught in complete bullshit, double down on it with such conviction and fury that you convince people by force that it isn't bullshit.

I could try listing real life implementations of the above BGA, but it would likely surpass HN's character limit.
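To make the point concrete, steps 1-2 of the BGA can be sketched as a generator (a toy Python sketch, hypothetical names). Note why it beats the spot-check: with a 5% bullshit rate, a sample of 5 claims contains zero bullshit about 0.95^5 ≈ 77% of the time, so a threshold-based BSA passes the source.

```python
import random

def bga_stream(truths, bullshit, bullshit_rate=0.05, seed=None):
    """Steps 1-2 of the BGA: emit mostly truths, with bullshit
    scattered sparsely enough to slip past random spot-checks."""
    rng = random.Random(seed)
    while True:
        if rng.random() < bullshit_rate:
            yield rng.choice(bullshit)
        else:
            yield rng.choice(truths)
```

Steps 3-4 are rhetorical rather than computational, which is exactly what makes them hard to counter with any algorithm at all.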


The problem is that posting anonymous bullshit is still effective on some people. For example, there is someone who frequents Hacker News and likes to go around saying that features arising from Julia's type system don't "just work". He/she will post all the time about how we are lying about all of our claims, how when we say it didn't take work it actually did, and how the features that come out are useless.

The specific mention is about uncertainty propagation in our differential equation solver library. It happened automatically because of the type system. There's evidence of the thread where we found out, since a user notified us that it works:

https://discourse.julialang.org/t/differentialequations-jl-a...

We have since been investigating the pharmacometric implications of this methodology and are seeing good promise in a paper that is coming out.

But then of course someone will come along and say "Cython can do it too". Yes, every Turing-complete language can do everything. The point is how a zero-lines-of-code feature led to a new avenue of research, and Julia's dependent compilation process allowed this to be a very optimized implementation. Of course, this near-zero-effort solution is pretty cool and something to share! However, as stated in the resource of this HN thread, bullshit propagates fast, so this other person had already come to this thread to start ragging on this work long before I could get here (since we were at a pharmacometrics conference getting feedback from the FDA about the uncertainty propagation algorithm! Just to show it's real, check the dates of ACoP 2018).
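For readers wondering what "it happened automatically because of the type system" means mechanically, here is a rough analogue in Python using duck typing. This is a toy with hypothetical names, not the actual Julia machinery; Julia's multiple dispatch additionally lets the compiler specialize the solver for the uncertain number type, which this sketch doesn't capture. The idea: a generic solver written with no knowledge of uncertainty propagates errors simply because the number-like type overloads arithmetic.

```python
import math

class Uncertain:
    """A value with a 1-sigma error. Arithmetic propagates the error to
    first order, assuming independent errors. NOTE: this naive scheme
    ignores correlations (e.g. between y and f(y) below), so it
    overestimates the error; real implementations track correlations."""
    def __init__(self, value, err):
        self.value, self.err = value, err
    def _lift(self, other):
        return other if isinstance(other, Uncertain) else Uncertain(other, 0.0)
    def __add__(self, other):
        o = self._lift(other)
        return Uncertain(self.value + o.value, math.hypot(self.err, o.err))
    __radd__ = __add__
    def __mul__(self, other):
        o = self._lift(other)
        return Uncertain(self.value * o.value,
                         math.hypot(o.value * self.err, self.value * o.err))
    __rmul__ = __mul__

def euler(f, y0, t0, t1, steps):
    """Generic explicit Euler integrator: knows nothing about Uncertain."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y = y + h * f(t, y)   # works for any type supporting + and *
        t += h
    return y

# dy/dt = -y with an uncertain initial condition: the error rides along
# through the solver with zero lines of solver code changed.
y = euler(lambda t, y: -1.0 * y, Uncertain(1.0, 0.1), 0.0, 1.0, 100)
```

Whether this counts as a language advantage or just an ergonomic nicety is, of course, exactly the dispute in this thread.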

So let me add a corollary to this principle: any sufficiently widespread work will have an anonymous disgruntled bullshitter that hates on it. I guess you're supposed to learn it's a compliment that someone runs around spending their time doing this to you. But hey... I wasn't going to respond anymore but the thread's focus was just way too perfect. See you next time mlthoughts2018: maybe the name will change soon to make it harder to bop the bullshit.


This is very similar to a principle I read from a guy who correctly predicted that Iraq had no chemical or nuclear weapons program.

Simple: if a source has been found to lie, apply a weight of zero.


Nice, that might be a worthwhile mindset, although I think 10 years might be a bit much.

Edit: I don't think I am qualified to say that, as I haven't lived my second 10 years yet.

PS: Well, you got onto my no-BS list.


I'm stealing this and spreading it as far as I can.


The problem with that is that other people will listen to their bullshit and, chances are, nobody will contradict them. So the bullshit spreads.


There is a German proverb I often heard as a kid, but now that I searched for it I could not find any sources. It goes like "Wenn ein Dummer einen Stein ins Wasser wirft, braucht es 10 Kluge, um ihn wieder raus zu bekommen.", which translates as "If a dumb person throws a stone into the water, it takes 10 smart people to get it back out."


Turkish has a very similar proverb. I wonder if it is a literal translation.


https://www.ribbonfarm.com/2014/02/07/an-information-age-glo...

Bullshit: (1) Data (often a firehose) produced by someone who is indifferent to the truth or falsity of what is being said (my short version of Harry Frankfurt's definition in On Bullshit. See also the Wikipedia summary). (2) Data that appears to contain more information than it actually does (my version, which I think is roughly equivalent). (3) Noise randomly tagged with truth-values to give it apparent legibility. (4) Non-requisite variety.

Gollumization: The impoverishment of a system through ephemeralization and weaponized attention.

Ephemeralization: The natural conversion of signal into bullshit through creative destruction (derived from the original definition due to Buckminster Fuller).

Arbitrage: Exploiting bullshit for profit.

Weaponization: Separation of data into information and bullshit, and deliberate direction of the two signals at different systems (generally a hunting party and a cargo cult respectively), with the intention of creating serendipity in one system and zemblanity in another.


The Russian propaganda war is based on this principle.

They are not transmitting a coherent alternative narrative. They are just delivering a constant stream of bullshit faster than it can be debunked. Merely increasing the cognitive load and creating cognitive dissonance is a valid propaganda method.


It's reminiscent of what's going on in China now: https://en.wikipedia.org/wiki/50_Cent_Party


Indeed! Our "architect" (cough) stuck us with mass microservices even though we are too small a group to need them. Bloatville. He kept quoting "separation of concerns" and other "magic Legos" abstraction/reuse/modularity BS memes. Plain ol' OOP classes do the same things.

I've been fighting to debunk it, but so far losing. It's more a case of momentum and if-it-ain't-broke-don't-fix than logic so far. Nobody seems to want productivity enough yet to care, so if they pay me to mow the lawn with tweezers, I'll tweeze away...


People with "architect" in the title are generally a problem, especially if they don't code anymore. I think architecture should be done by the senior engineers who work on the codebase. If necessary, they can still coordinate with other teams.


Oh hey, I had the same issue. Except micro-services eventually lost.

Having class libraries in NuGet did work well for us (we are open source).

Next trend to die soon: TDD / "Unit tests" that mock half the app


The way to fight bullshit is not by trying to refute it, which would be futile due to the bullshit asymmetry principle.

The way to go is to offer a compelling alternative.

Of course, this applies only if the bullshit has been confirmed as such and if the person promoting it has shown themselves to be uninterested in a sincere and fair argumentative discussion.


In an ideal world, your technical argument would be enough for higher-ups to take your concerns seriously. In the real world, it's mostly about how much they like and trust you. If they don't, then the validity of your argument will be of no value.


In the real world you can't evaluate every argument. It just takes too long. I definitely filter by people who I know usually have good ideas vs. others whose ideas never make sense.


I feel like this can be related to "entropy always increases in an isolated system". Creating bullshit is easy (it requires little energy) and increases entropy, while debunking bullshit tries to reduce entropy, so a lot of energy needs to be put into the system?


Second Law of Bullshit.



As a legal term and technique:

Chewbacca defense https://en.wikipedia.org/wiki/Chewbacca_defense


Not always, but often. My two cents: a few years ago we were building a cryptographic extension to support ECC, and since ECC and the protocols using it are relatively new compared to RSA or Diffie-Hellman, it is more difficult to answer questions quickly. Someone on the customer side asked a question combining acronyms like ECDH, ECIES, etc. in a way that made it impossible to draw a clear connection between all the concepts. We spent a few days just trying to make sense of the question, only to finally realize it made no sense.


Refuting bullshit is actually 10x easier than coming up with bullshit in the first place.

The reason people think it's the other way around is a testament to the power of appeal to authority.


Refuting it is easy. Convincing someone that the bullshit they believe is wrong is an entirely different matter.

See: Flat earthers, anti-vaxxers, and birthers.


Actually what I wrote is bullshit. I have no idea what the average time is to come up with bullshit vs. average time spent refuting it.

So color me convinced.

On the other hand, it took me about 10 seconds to write the previous paragraph but about 100 seconds to decide how to word my initial bullshit.

So my initial bullshit stands. :)


Christopher Date and Hugh Darwen, in their work on type systems and the relational model, like to use the Principle of Incoherence: "A principle, sometimes invoked in defense of a less than fully successful attempt at criticizing some technical issue, to the effect that it's hard to criticize something coherently if what's being criticized is itself not very coherent in the first place. Occasionally referred to, a little unkindly, as The Incoherent Principle."


Dual principle: "The amount of energy [and intelligence] needed to understand the refutation of bullshit is an order of magnitude bigger than that needed to consume the bullshit."


Since you are now dealing with observables, are you bordering on the Quantum Bullshit Theory?

Maybe an S operator for scattering bullshit.

Time independent Hamiltonian for Steady-State bullshit.

etc.


Given the reality distortion, frame dragging, and other properties, I think it is better described by gravity-style theories.


This is basically how cryptography works (one way functions).
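As a concrete illustration of that asymmetry (a minimal Python sketch; the 4-character search bound and function names are arbitrary choices for the demo): computing SHA-256 forward is one cheap call, while going backward means exhaustive search.

```python
import hashlib
import itertools
import string

def fingerprint(msg):
    """Forward direction: one cheap hash call (like generating the bullshit)."""
    return hashlib.sha256(msg.encode()).hexdigest()

def brute_force_preimage(digest, max_len=4):
    """Reverse direction: try every lowercase string up to max_len
    (like refuting the bullshit: vastly more work per item)."""
    for n in range(1, max_len + 1):
        for chars in itertools.product(string.ascii_lowercase, repeat=n):
            candidate = "".join(chars)
            if fingerprint(candidate) == digest:
                return candidate
    return None

digest = fingerprint("bs")                 # 1 hash evaluation
recovered = brute_force_preimage(digest)   # up to 26 + 26**2 + ... evaluations
```

In cryptography this asymmetry is the design goal; here it's the failure mode.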


Could this be used as part of a formal definition of bullshit?


Teflon Don Drumpf generates BS faster than even John Moschitta Jr. can call it out.



