Yeah, it's very much Alex Jones's favoured tactic as well.
I think the Gish Gallop is one of those situations where it's entirely valid to deploy an ad hominem - and the more ruthlessly, the better - since the only viable and time-efficient approach is to discredit the person so that everything they say is ignored by as many people as possible[0].
Some people are - understandably - squeamish about ad hominems, but if a person is a known bullshitter, liar, or simply parrots whatever they've heard elsewhere without regard for accuracy, it's often the best way of dealing with them because you won't have time to deal with the merits, or lack thereof, of their arguments individually.
The problem comes when the ad hominem is your default approach as, again, it so often is in the media and politics... and, indeed, in work and everyday life. I don't enjoy seeing it deployed against people acting or arguing in good faith, even where I disagree with them.
But if somebody is acting or arguing in bad faith, have at them.
[0] Yes, there might be some truth mixed in with the lies and bullshit, but you'll be able to get that same truth from better and more reliable sources. It's a better use of time and mental energy to simply ignore the unreliable source.
Would you have a link showing it in action? Thanks.
I think I've seen it too now that I think of it. What's interesting about it is that it is not really a logical fallacy since it has nothing to do with logic.
This is the way of Russian misinformation and propaganda. Timothy Snyder discusses at length in The Road to Unfreedom.
And it has spread. Witness U.S. hard-right politics. Eventually the "Gish Gallop" becomes an assault on factuality itself. As Jason Stanley says, the goal of ending factuality is to "smash truth and replace it with power." You can hear it in such statements as the one a right-wing lawyer friend made: "There is no truth in politics."
Hmm, not excusing the Russians, but in the spirit of saving the meaning of "Gish Gallop":
Russian propaganda does not strike me at all as using Gish Gallop tactics. They repeat fairly simple points in multiple venues without trying to overwhelm the opponent using rhetorical superiority.
I do think that the Western hawks like Sikorski and McFaul are far closer to using Gish Gallop in debates.
I also think that the extent of Russian propaganda is overstated. When it is there, it is always obvious. Western style consent manufacturing is way more pervasive and stealthy.
turns out the russians can do it just as well, if not better (see above for examples, most of which the US never said w/r/t Iraq or Afghanistan, making it uniquely russian gish gallop)
reminds me of russian propaganda from earlier years, too, when they invaded Ukraine earlier, when they genocided millions in Ukraine, etc.
it seems russia needs no bad role models, based on their own behavior
Citing an almost century-old incident that might have happened, but ducking when asked about the recent killings by the West in Bosnia, Serbia, Syria, Yemen, Iraq and Afghanistan, and talking about what X might have done, is precisely the propaganda I spoke of when I brought up the West's genocide of millions of Muslims.
Some friendly advice: clean your own backyard first before preaching to others; till then, carry on living in a bubble created by Fox, CNN, BBC etc. Cheers.
thank you for again demonstrating russian propaganda gish gallop
of course it would be a waste of time to address the layers of nonsense due to Brandolini's law,
but suffice it to say that russia, who never apologized for genociding millions of Ukrainians, repeats the same gish gallop in their propaganda to distract from their war on, and latest attempted genocide of, Ukraine
thank you for again demonstrating Western propaganda gish gallop
of course it would be a waste of time to address the layers of nonsense due to Brandolini's law,
but suffice it to say that West, who never apologized for genociding millions of Muslims, repeats the same gish gallop in their propaganda to distract from their war on, and attempted genocide of, Syria and Yemen
-----
I fixed it for you. Now I know you will have a knee-jerk reaction and get back at me. And... I am not even Russian, but I do believe it's time for the world to accept the existence of Russia and China and the end of the hegemony of the US and the Dollar.
And I will not respond to you further. Keep living in the bubble inflated by Western media.
thank you for again demonstrating russian propaganda gish gallop, from multiple whataboutisms, to false equivalencies, to outright fabrications, to tu quoque fallacies, to ad hominem fallacies, all wrapped up neatly in an "I know you are but what am I"
of course it would be a waste of time to address the layers of nonsense due to Brandolini's law,
but suffice it to say that russia, who never apologized for genociding millions of Ukrainians, repeats the same gish gallop in their propaganda to distract from their war on, and latest attempted genocide of, Ukraine,
just like you are currently using it to distract from the topic at hand, which is russian propaganda
it seems, unfortunately, you have a chip on your shoulder preventing you from sticking to this topic and discussing it honestly, as do most people who repeat such russian propaganda, and all you have instead is "no u". Indeed, you were never "responding" in the first place, just complaining that people were criticizing russia and attacking them for it
this explains why russia has consistently lost UN votes in which they repeat the exact gish gallop propaganda you have, rather than defend their actions on their own merit – seems the world thinks russia should clean their own house before preaching to others about theirs
Excuse me for my tone, but might you actually be gish galloping here?
> "Russian propaganda does not strike me at all as using Gish Gallop tactics. They repeat fairly simple points in multiple venues without trying to overwhelm the opponent using rhetorical superiority."
First, the point isn't rhetorical superiority per se, but to "overwhelm their opponent by providing an excessive number of arguments with no regard for the accuracy or strength of those arguments." Straw-man arguments, equivocations, outright lies, etc.
This is precisely what Russian propaganda does. It sponsors and promotes all sorts of contradictory theories, statements, and so on, without necessarily countering opposing statements. This works in part because of Brandolini's law, as this thread initially stated. So, the original truth or argument against which they are opposed becomes "just another theory" and there's no time to actually counter all of the bad arguments, even if they are ridiculous.
> "I do think that the Western hawks like Sikorski and McFaul are far closer to using Gish Gallop in debates."
I can't find any way that this makes sense, except that both Sikorski (a Polish politician) and McFaul (former US ambassador to Russia) might be seen as anti-Russian.
> I also think that the extent of Russian propaganda is overstated. When it is there, it is always obvious.
It's far from obvious, since Russian propaganda often simply supports a variety of opposing views in order to muddy the waters, again the point.
> Western style consent manufacturing is way more pervasive and stealthy.
"Manufacturing consent" wasn't the issue; this is itself a way of fracturing the focus: equivocation.
An example of Russian gish galloping from The Road to Unfreedom:
At 1:20 p.m., Malaysia Airlines Flight 17 was struck by hundreds of high-energy metal projectiles released from the explosion of a 9N314M warhead carried by a missile fired from that Russian Buk launcher at Snizhne. The projectiles ripped through the cockpit and instantly killed the pilots, from whose corpses some of the metal was later extracted. The aircraft flew apart ten kilometers above the earth’s surface, its passengers and their possessions scattered over a radius of fifty kilometers. Girkin boasted that his people had shot down another plane over “our sky,” and other commanders made similar remarks. Alexander Khodakovskii told the press that a Russian Buk was active in the theater at the time. The Buk was hastily withdrawn from Ukraine back to Russia, and photographed along the way with an empty missile silo. What had happened was quite clear, and has since been confirmed by the official Dutch-led investigation.
The law of gravity seemed to challenge, at least for a few hours on the afternoon of July 17, 2014, the laws of eternity. Surely the passengers who died were the victims, not the Russian soldiers who fired the missile? Even the Russian ambassador to the United Nations was thrown for a moment, using the excuse of “confusion” to explain how a Russian weapon had brought down a civilian airliner. Yet Surkov’s apparatus acted quickly to restore the Russian sense of innocence. In a typical mark of tactical brilliance, Russian television never denied the actual course of events: that a Malaysian airliner had been brought down by a Russian weapon fired by Russian soldiers taking part in an invasion of Ukraine. Denying the obvious only suggests it; defeating the obvious means engaging it from the flanks. Even under stress, Russian media managers had the presence of mind to try to change the subject by inventing fictional versions of what had happened.
On the very day the plane was shot down, all of the major Russian channels blamed a “Ukrainian missile,” or perhaps a “Ukrainian aircraft,” for the downing of MH17, and claimed that the “real target” had been “the president of Russia.” The Ukrainian government, according to the Russian media, had planned to assassinate Putin, but by accident had shot down the wrong aircraft. None of this was vaguely plausible. The two planes were not in the same place. The failed assassination story was so ludicrous that RT, after trying it on foreign audiences, did not pursue it. But within Russia itself, the moral calculus was indeed reversed: by the end of a day on which Russian soldiers had killed 298 foreign civilians during a Russian invasion of Ukraine, it had been established that Russia was the victim.
The following day, July 18, 2014, Russian television scattered new versions of the event. Myriad inventions were added to the multiple fictions, not to make any of them coherent, but to introduce further doubts about simpler and more plausible accounts. Thus three Russian television channels claimed that Ukrainian air traffic controllers had asked the pilots of MH17 to reduce their altitude. This was a lie. One of the networks then claimed that Ihor Kolomois’kyi, the Ukrainian Jewish oligarch who was governor of the Dnipropetrovsk region, was personally responsible for issuing the (fictional) order to the air traffic controllers. In an echo of Nazi racial profiling, another network later provided an “expert” on “physiognomy” who claimed that Kolomois’kyi’s face demonstrated his guilt.
Meanwhile, five Russian television networks, including some that had peddled the air traffic control story, claimed that Ukrainian fighter aircraft had been on the scene. They could not get straight just which kind of aircraft this might have been, providing pictures of various jets (taken at various places and times), and proposing altitudes that were impossible for the aircraft in question. The claim about the presence of fighter planes was untrue. A week after the disaster, Russian television generated a third version of the story of the downing of MH17: Ukrainian forces had shot it down during training exercises. This too had no basis in fact. Girkin then added a fourth version, claiming that Russia had indeed shot down MH17—but that no crime had been committed, since the CIA had filled the plane with corpses and sent it over Ukraine to provoke Russia.
These fictions were raised to the rank of Russian foreign policy. When asked about MH17, Russian Foreign Minister Sergei Lavrov repeated the inventions of Russian media about air traffic controllers and nearby Ukrainian fighters. Neither of his claims was backed by evidence and both were untrue.
Russian media accounts were impossible not only as journalism but also as literature. If one tried to accept, one by one, the claims of Russian television, the fictional world thus constructed would be impossible, since its various elements could not coexist. It could not have been the case that the plane was shot down both from the ground and from the air. If it had been shot down from the air, it could not have been shot down by both a MiG and an Su-25. If it had been shot down from the ground, this could not have been the result of both a training accident and an assassination attempt. Indeed, the Putin assassination story contradicted everything else that the Russian media claimed. It made no sense to say that Ukrainian air traffic controllers had communicated with the Malaysian pilots of MH17 as part of a plot to shoot down the Russian presidential aircraft.
But even if all of these lies could not make a coherent story, they could at least break a story—one that happened to be true. Although there were certainly individual Russians who grasped what had happened and apologized, the Russian population as a whole was denied the possibility to reflect on its responsibility for a war and its crimes. According to the surveys of the one reliable sociological institute in Russia, in September 2014 86% of Russians blamed Ukraine for shooting down MH17, and 85% continued to do so in July 2015, by which point the actual course of events had been investigated and was clear. Russian media urged Russians to be outraged that they were blamed.
Ignorance begat innocence, and the politics of eternity went on.
Snyder, Timothy. The Road to Unfreedom (pp. 179-182). Crown. Kindle Edition.
You are free to use your meaning of Gish Gallop and distract from the original discussion, which has nothing to do with whether the Russians have done horrible things (they have) or not.
Ironically, Snyder, while possibly correct on everything, is using Gish Gallop wall-of-text style.
If I have or had any misunderstanding about the term, it might be that gish galloping seems to refer especially to conversations - oral conversations bounded in time - whereas the Russian-style "firehose of falsehood" exists along several timescales, using a wide number of voices all chattering in contradictory ways. Nevertheless, my definition or meaning of the term isn't "mine"; it's the standard.
Edit: Fixed the wall of text & some typos/false phone swypes here.
I have a similar adage: the notion that evil triumphs when good men do nothing is inaccurate. It's more like "1 evil person will triumph unless 1000 good men do something."
As a library maintainer, closing and empathetically conveying why a pull request is not a net benefit to the project is an order of magnitude more effort than what it takes to throw up not-well-motivated pull requests on someone else's project.
I would argue that libraries have an additional problem in that there's a many-to-one relationship between users and maintainer. It feels like there should be a way to design the ecosystems so that the maintainer doesn't end up with so many default expectations.
I think a more generalized version of OP's concept goes into a lot of things.
I thought about this when I first played Minecraft on a multiplayer server where people had vandalized the world. It's much harder to keep it tidy than to make it ugly. I think they usually solved this by giving mods the ability to rewind. Crude but effective.
Like, what makes Wikipedia work? It's easy to go back in the version history when somebody breaks stuff?
It looks like there should be a bunch of tricks that could be used to design these systems so that doing the right thing is easier than breaking stuff.
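One such trick is the Wikipedia-style append-only history, where undoing damage is a single cheap operation. Here's a minimal sketch of the idea as a toy `Page` class (purely illustrative; not any real wiki's API):

```python
class Page:
    """A toy versioned page: every edit appends a revision, nothing is destroyed."""

    def __init__(self, text=""):
        self.history = [text]

    @property
    def current(self):
        return self.history[-1]

    def edit(self, new_text):
        self.history.append(new_text)

    def revert_to(self, revision):
        # A revert is just another edit that restores an old revision, so
        # undoing vandalism costs one operation no matter how destructive
        # the bad edit was.
        self.history.append(self.history[revision])

page = Page("a tidy, well-sourced article")
page.edit("VANDALIZED!!!")
page.revert_to(0)  # one step undoes the damage; the vandalism stays on record
```

The key design choice is that destruction is never cheaper than repair: breaking the page takes one edit, but so does fixing it.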
I see pull requests on my projects from new/outside contributors as suggestions, not actual demands.
Refusing a suggestion doesn't require a full explanation, and it doesn't mean the suggestion won't ever be reconsidered.
People who engage in open source contributions are usually aware of this, the fact that they generally can fork and use their own "improvement" for themselves is usually enough, and the people who feel entitled to get their code merged in your repository are usually not the kind of regular contributors you want to keep around in my experience.
There's also similarities with making any change to software.
It's relatively easy to merge code in, but it takes much more effort and thought to remove the code later. Especially once other things are dependent on it...
I like the very old version from Jonathan Swift in 1710 at the bottom of that:
"Few lies carry the inventor’s mark, and the most prostitute enemy to truth, may spread a thousand without being known for the author: besides, as the vilest writer has his readers, so the greatest liar has his believers: and it often happens, that if a lie be believed only for an hour, it has done its work, and there is no farther occasion for it. Falsehood flies, and truth comes limping after it; so that when men come to be undeceived, it is too late; the jest is over, and the tale has had its effect: like a man, who has thought of a good repartee, when the discourse is changed, or the company parted; or like a physician, who has found out an infallible medicine, after the patient is dead."
We know he didn't originate it, but do we really know he never said it? Or perhaps he even helped popularise it. I can certainly understand why it might be thusly attributed - whereas it doesn't (to my ears) sound the slightest bit Churchillian.
At any rate the fact it is so often incorrectly attributed seems fitting...
It's usually not just random bullshit that gets spread around, but believable stories that follow a similar sort of logic to those we generally agree are true. Though what might seem unbelievable to you may be dramatically different from what others consider so, and some things you think are true others might call bullshit. The material application of knowledge, in the end, always demonstrates its useful validity.
It's usually stories that support the sharer's worldview, or that they (in some way), almost 'want' to be true. So a die hard vegan might be more likely to share a story about the dangers of eating meat than a barbecue enthusiast, and someone who believes their political side to be the 'moral' one is probably going to share stories about good things done by those on their side and bad things done by their enemies.
So long as the story supports those beliefs the viewer holds dear, they'll believe it regardless of its inaccuracy. Same goes with media outlets if a fake story matches their political slant.
"Reality" is just a story you make up when coming to terms with the fact that you can't always get what you want (cf. Freud). There is always friction between the Real and any given "reality."
at first, sure. challenge your assumptions even a little and the stress becomes imperceptible. beyond that, once you understand that people will lie to you for their own gain, you won't believe anyone by default anymore.
it's quite healthy to question stuff in this way, I think.
It is also random bullshit. There is an entire group of people convinced that if you are shot with a 9 MM round, it "blows the lung out of the body".[1] They also think that 9 MM is a high caliber round, and that the AR in AR-15 stands for "assault rifle".
These lies are knowingly spread by the White House and the Democrats. Anyone who devoted 1 minute of thought and research would know they are all false, but we're in an agenda-driven world, where fact, truth, and reality are all secondary to political ideology.
> These lies are knowingly spread by the White House and the Democrats.
This sentence appears an example of Brandolini's Law. The problematic phrases are "the White House" and "the" Democrats. I'll explain.
- It is one thing to cite a source that shows a politician making an incorrect statement. That's fair and important.
- It is quite another to claim that a group of people are knowingly spreading these incorrect facts. It is easy to claim <some group> is spreading lies without specifying (1) the members of the group involved; (2) the mechanism by which these people speak for the group; and (3) why one's definition of the group is reasonable in context. Not to mention (4) Why is this particular detail worth mentioning in isolation from the broader context?
- Perhaps most importantly, when someone writes (or speaks) (5) is that person writing (or speaking) in ways that maximize the desired interpretation? And good luck with that, given the realities of your audience's background knowledge, reading comprehension, attention span, and biases.
I'll posit an even more fundamental basis for Brandolini's Law that has nothing to do with intentional malice or deception: the imprecision of human language makes it inherently difficult to fact-check someone else's sentence, because one has to interpret the sentence before criticizing it. This leaves some room for the original author to deflect the criticism by saying "I didn't mean that" (and this might be genuine, fair, and reasonable!) ... but meanwhile no clarification was issued (seriously, WTF?, why aren't corrections better integrated in the source material?) and many people are still reading the original statement and interpreting it in some subjective way.
Sometimes I think it is amazing human societies function at all.
I also find it amazing that human societies survive at all. It would be interesting to find the answer to what is the minimum inefficiency required for a complex system to stop functioning. Of course, this question itself needs more precision, but you get the idea. For example, I've been in traffic jams in some cities with terrible infrastructure. It seems like a totally hopeless, chaotic mess, and yet at the end of the day everyone gets to their destination.
The fact that people don't understand relative ballistics, or the acronyms and trade names of arms manufacturers, is totally unrelated to the fact that you shouldn't be able to buy an AR-15 for $600 in a strip mall. I grew up shooting, but I'm sick and tired of waking up to another mass shooting every couple of days. This never happened before Columbine and Aurora.
You can nitpick this and that, but that doesn't mean anything to people praying their kids are safe at school, or their partner comes back from the grocery store.
I thought "you called it a clip not a magazine, so your argument is invalid" went out of style a few years ago. I haven't seen that one used in the wild for a bit.
I'm not sure I understand why it's important that your average person has a completely accurate understanding of which guns blow out lungs and which guns only puncture them. Getting shot by any gun will probably result in a permanent injury of some kind, to say the least.
And that's not a position on gun rights, I won't state mine just to keep this thread on topic.
This sounds like a molehill being propped up as a mountain to distract and derail people from addressing actual topics.
Has it though? It still seems to me the most dangerous falsehoods are spread on the parts of the internet receiving the least "sunlight".
For myself I still value opinions and/or claims that have been exposed to the greatest number of eyeballs and opportunities to respond, and I'd like to think most HN readers would feel similarly.
I think it is because once somebody "buys into" the false information they now in a sense "own it".
Once they have the faulty information they often spread it to others. Therefore they do not want it to be proven wrong, because that would show they have been spreading incorrect, unreliable information, which would mean that they can not be trusted. So there is a lot of resistance against correcting the incorrect information.
It's not a new idea at all - certainly a lot older than 2013.
This adage, and variations of it, get attributed to Mark Twain and Winston Churchill, amongst others: "A lie gets halfway around the world before the truth has a chance to get its pants [or boots] on."
I'm pretty surprised the Wikipedia article makes no mention of this since there's clearly prior art for the concept.
The solution isn't to create an LLM to debunk the bullshit, it's to create a second LLM that will double down on the bullshit so that the onus is on LLM1 to debunk it.
Didn't know about it, so I had to check it out [1]. Looks like another bleak corporate thing where they have to gamify and infantilise a so-called "process" in order to fool some execs into paying money to newly minted experts in this new business technique. Capitalism is really doomed with these kinds of people running the show; Schumpeter was right.
If you've ever worked with the kind of organisation where this technique is valuable, you'll understand it's exactly that: fooling people into telling you what they actually need to build, instead of what their Serious Businessperson Cosplay persona tells them they need.
GPT makes this 100x or maybe even 1000x worse. On the other hand, can we potentially train generative AI to detect and refute BS as well? It may be our only hope.
> On the other hand, can we potentially train generative AI to detect and refute BS as well? It may be our only hope.
LLMs store their training information in an incredibly lossy format. You're going to need some kind of different approach if you want one to tell the difference between plausible-sounding bullshit and implausible-sounding truth.
GPT is also pretty good at cutting through BS. It can detect logical fallacies for instance or explain a lack of rigor in a discussion. Depends on how you fine tune it, couple it with an external fact database and you could get it to cite its sources. Couple it with a prolog engine AND a fact database and it could modus pwnens ur ass.
GPT will not be able to detect contradictions / logical fallacies generated by GPT itself or similar LLMs. If it could spot the mistake, it wouldn't make it in the first place.
The other issue is that the generated content might be composed entirely of true facts, but used to manipulate via less in-your-face techniques: things like agenda setting, or flooding with content that contains no lies but a particular interpretation of those facts.
> GPT will not be able to detect contradictions / logical fallacies generated by GPT itself or similar LLMs. If it could spot the mistake, it wouldn't make it in the first place.
That is absolutely NOT true. Try it. Next time it does it, quote it, and ask it to find the logical fallacy and it will.
There isn't another session following the existing session asking it to double check its work. It is running open loop.
Humans exhibit the SAME behavior. They make logical fallacies all the time, but if you ask them to identify the logical fallacy in a passage of their own text they can spot it easily. Attention to Logical Fallacies Is All You Need.
GPT is not Spock, but you could make it Spock by combining LLMs and external tools and fact databases.
----
Please spot any potential logical fallacies in this statement
> GPT will not be able to detect contradictions / logical fallacies generated by GPT itself or similar LLMs. If it could spot the mistake, it wouldn't make it in the first place.
This statement contains a few potential logical fallacies:
False dilemma (also known as false dichotomy or either-or fallacy): The statement implies that either GPT can detect all logical fallacies and contradictions, or it cannot detect any of them. In reality, GPT's ability to detect logical fallacies could be imperfect, meaning that it can identify some fallacies but still make others.
Circular reasoning (also known as begging the question): The statement assumes that GPT cannot detect logical fallacies generated by itself or similar LLMs, without providing evidence or reasoning to support this claim.
Hasty generalization: The statement seems to imply that if GPT makes a mistake, it must be unable to detect that mistake in general. However, GPT's performance can be inconsistent, and it might sometimes make mistakes that it can, in fact, detect in other contexts.
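The "external tools and fact databases" idea above can be sketched without any model at all. In this toy version the LLM is stubbed out and a claim is checked against a tiny fact store; every function name and fact entry here is illustrative, not a real system:

```python
# Toy fact store; a real system would query a curated knowledge base.
FACTS = {
    "AR in AR-15": "ArmaLite Rifle",  # the manufacturer, not "assault rifle"
}

def generate_claim():
    # Stand-in for an LLM's open-loop output, which may contain bullshit.
    return ("AR in AR-15", "assault rifle")

def fact_check(claim):
    """Look the claim up and return a verdict, plus a correction if refuted."""
    topic, asserted = claim
    known = FACTS.get(topic)
    if known is None:
        return "unverifiable", None
    if asserted == known:
        return "supported", None
    return "refuted", known

verdict, correction = fact_check(generate_claim())
```

The point is the closed loop: the generator's output is never trusted directly, but routed through a checker that can cite what it checked against.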
Scientists funded by special interest groups could easily design studies to show the results desired by their benefactors. In fact, that's exactly what happened with Big Tobacco to 'disprove' the link between smoking and cancer. It's easier to introduce bias into a study than to prove that bias was introduced. It's like complex computer simulations: if you change one seemingly insignificant variable, the simulation can give you a completely different result.
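The simulation point is easy to demonstrate with the logistic map, a standard chaotic toy model (the parameter values below are just the textbook choices): perturbing the starting value by one part in ten million produces a completely different trajectory.

```python
def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate x -> r * x * (1 - x), a classic chaotic recurrence."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.4)
b = logistic_trajectory(0.4 + 1e-7)  # one "seemingly insignificant" tweak
divergence = max(abs(x - y) for x, y in zip(a, b))
```

Early on the two runs are indistinguishable, but the tiny difference compounds every step until the trajectories bear no resemblance to each other, which is exactly the property a motivated modeler can hide behind.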
Looking at incentives can lead you astray in the short term (due to complexity of incentives or hidden incentives), but in the long run, incentives provide the most predictable framework for figuring out the behavior of people and animals.
Especially with multi-agent systems (anything to do with people or biology), which are inherently vastly more complex and harder to predict than mechanical systems, it's very important to know the incentives of the people making the statements.
Most often they are overestimating their confidence, with a bias toward their own incentives.
It can be debated whether that’s a logical fallacy. If you ask ten experts for their studies on something and you find a reasonable correlation (e.g. all the ones promoting smoking are funded by tobacco lobbyists while the rest have a more natural distribution of results and don’t show any correlation of potential undue influence), it’s not a logical fallacy to suggest “hey, this argument is not being made in good faith, and the reason I think this is because of the funding sources for this person”. Indeed, we know this was an explicit strategy employed by Big Tobacco with lung cancer (they knew it was a problem but intentionally funded research to get contrary results and muddy the waters) and by oil companies with climate change (same thing).
Only if you believe that work should not be evaluated on its own merits. If the methodology is good, the sample sizes big enough, the statistics correctly done, etc., I don't care if Mickey Mouse wrote the study.
The climate change example is bad because oil companies have been actively suppressing correct evidence showing the existence of climate change since the 1980s. I'm less familiar with the tobacco studies, but I wouldn't be surprised if they've been doing the same thing, just on longer time scales.
> If the methodology is good, the sample sizes big enough, the statistics correctly done, etc., I don't care if Mickey Mouse wrote the study.
I’d agree with this but in the real world, we usually don’t and can’t really know if the methodology and statistical analysis were solid and rigorously followed as described in the paper – if they were even adequately described. The only way to be sure is to replicate the study, and that costs time and money (as per Brandolini's Law).
Evaluating incentives (e.g., follow the money) is a useful heuristic for judging the credibility and trustworthiness of a particular study.
Most people tend to engage with the world as it is rather than how they’d like it to be. It’s definitely not as simple as you make it out to be. They employ both strategies: they try to suppress real studies, and they try to generate bullshit on top to confuse the situation and overwhelm the capacity of the system to deal with it.
Sure, it’s an ad hominem attack to use past behavior to judge someone’s current actions or statements and to take action as a result (e.g. ignoring them). And yet, is that a logical fallacy? I don’t think so, but you seem to have reached the opposite conclusion.
I'm tempted to say that claiming it's naivete to expect people to do work to evaluate a research paper is naive, as well as ad hominem. If you don't have the time or expertise to do so, then do the rest of us a favor and don't try.
Now, if you're talking about using such criteria as a heuristic to decide which papers to put effort into evaluating, then we have another discussion to have.
That’s exactly what I’m saying. I don’t bother trying to evaluate a paper because a) my area of expertise only extends so far, and b) papers are largely worthless in their ability to communicate whether the research done was valid. The replication crisis and the many, many fraudulent papers surely would convince anyone that there’s a real problem with our current system. Maybe you’re a brilliant mind who can separate the wheat from the chaff. Most people, even leading researchers in the field, have trouble though, so I don’t feel like I’m in bad company.
I am sympathetic with your point of view, but I find myself in a position of what Scott Alexander calls “epistemic learned helplessness”[0]. My powers of understanding the correct method of scientific research are limited. I understand that there must be some studies that are done correctly, but I don’t know which ones. I assume that these are the older studies that haven’t been refuted or retracted after a long time. So I end up trying to trust new research that doesn’t seem to stem from a conflict of interest… although it makes sense to me that things that researchers have some interest in, somehow, are the things that they study. It’s a very neat trap I find myself in! Nothing for it but to become intimately familiar with the methods of scientific research?
I hate how common it is for popular youtubers to title a video something about "debunking" when it's really an hour-long incoherent rant. But now that the video exists, you just share it, and the opposition has to watch an entire video of garbage; to counter you, they have to basically transcribe and explain it.
Brandolini's Law existed 100s of years before the advent of the Internet.
One of my favorite "Mark Twain" quotes is:
“a lie can travel halfway around the world before the truth puts on its shoes”
The best part is that we actually have no proof that he said it, demonstrating Brandolini's Law well before the Internet was invented. In fact, this quote and variants of it have also been misattributed (perhaps not??) to other great quote-misattribution magnets like Gandhi, Churchill, and Einstein.
Interestingly, the amount of effort needed to get large language models (LLMs) to generate trustworthy information in a reliable manner often seems to be an order of magnitude bigger than the amount of effort needed to get them to generate bulls#!t.
I’d like to propose Altman’s law: the amount of energy needed to refute bullshit generated by linguistic AI is two orders of magnitude higher than the amount it takes to create it.
Anyway, at least some of this "misinformation" is quickly transmuted into questions in the form of statements, prompting research and rectification with veracious information, which... Yeah, Cunningham's law. It's the prodding, spurring nature of the way it's proposed that makes it so ingenious.
And I've also had PhDs produce woefully inaccurate answers to questions I've asked them... Offering complete deference, absolute trust, to anybody is absurd. The best model, I think, is "I'll believe it when I see it." You really can't trust anyone otherwise.
This is one of those things that didn't exist before LLMs. Unlike AI agents, people usually form an informed opinion of things and don't hallucinate facts with great confidence.
Pretty astounding that all these people had LLMs back then and didn't release them. It's also crazy that the rabbi was talking with an LLM back then. The folk story precedes modern computers. How did he run inference? There are probably many such secrets in our ancient ancestors' pasts. Who knows how many H100s they had back then?
I watched some cable news channels once at a friend's. It appears they have already embraced the AI revolution since they had newscasters reading things that seemed unlikely and which I later confirmed to be untrue. The only conclusion is that LLMs hallucinated the text and AI-generated newscasters read it.
Humans would never make the mistakes these guys did. I think we should regulate this news that isn't factual.
I like the deadpan, but just to say it: when billionaires decide something is in their interest to have at least some people believe, they can find someone so unscrupulous that they will repeat the lie on air.
“Overwhelm your opponent by providing an excessive number of arguments with no regard for the accuracy or strength of those arguments.”
It is, in a way, fun to watch this in action and how often it’s employed, now that I know the name for it.