Are Experts Real? (fantasticanachronism.com)
187 points by tlb 10 months ago | 158 comments



This essay points out a pervasive problem that doesn't get much attention (aside from fear-mongering).

Because it's a topic so easily prone to hyperbole, it helps if we can reduce it to a few premises.

- Science is science, but the practice of science is a social system. People get tenure, jobs, promotions, invitations to speak, etc., based on how well they do in the judgment of their peers

- Political systems exist for political reasons, i.e., if you don't have a reproducible, falsifiable test in a feedback loop somewhere, your career and goals are all about image. There's no alternative, either for you or your field of study. (That doesn't mean other sciences aren't important or aren't really science.)

- If somebody appears on TV or in the media playing the role of an expert, it is, by definition, because they wanted to appear on TV or in the media. The goals of being popular and being asked about important questions, and the goals of whatever your field is, are ... two different things. The reporter doesn't care who figured out X. They care who can bring excitement and emotion to talking about X.

- All this leads directly to the problem of the average science news consumer: any communication you receive about science or research is the result of a lot of conflicting interests. "Being good science" is way down the list, sadly. I imagine the percentage of average readers who understand p-hacking is below 5%

- None of this is a secret. The average news consumer knows it, even if they're not able to articulate it

- Add in things like the reproducibility crisis, and we've got a huge problem as a species. We've got one flashlight to light our way in the darkness and it turns out there's a loose connection in there somewhere
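To put a rough number on the p-hacking point above, here is a minimal sketch of the multiple-comparisons math (the significance threshold and test count are illustrative assumptions, not figures from the essay):

```python
# If a researcher tests 20 independent null hypotheses at alpha = 0.05,
# the chance that at least one comes back "significant" by luck alone
# is roughly 64% -- which is why unreported multiple testing (p-hacking)
# can manufacture publishable results out of noise.
alpha = 0.05   # conventional significance threshold (illustrative)
tests = 20     # number of hypotheses quietly tried (illustrative)

p_at_least_one = 1 - (1 - alpha) ** tests
print(f"P(at least one false positive in {tests} tests) = {p_at_least_one:.2f}")
```

The same arithmetic is why pre-registration and multiple-comparison corrections exist; the point here is only how fast the false-positive probability grows.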


> ...by definition, because they wanted to appear on TV or in the media

I suspect this misses some nuance; there is a meaningful difference between people who appear on the media because they want exposure, and people who do it because they want their field of study to be well represented.

For example, there are people who accept interviews even though they genuinely dislike the process, because they feel it is a duty that comes along with their chair, etc. I doubt these are ever the most dynamic interviews, but they are common.

I agree this doesn't really extend to the level of the same old bobbleheads pulled up on prime-time news TV, but I don't think you can paint all media exposure with this brush.


I think that comes down to self-proclaimed experts vs experts as decided by their peers.

Many of the people I've met who were considered experts and had a vast depth of knowledge never referred to themselves as experts and seemed to get embarrassed at the idea.

Whereas I've met several self-proclaimed experts who liked to talk a lot about how great they were, but never really showed the depth of knowledge the other kind of experts showed.

At least in my experience, anyway.


Peer incentives and desires can also be skewed / wrong. Just because there's some kind of consensus doesn't mean the conclusion is correct.


Is it better to have the right framework but come to the wrong conclusion (for now), or is it better to reach the true conclusion with the wrong framework or basis? I guess it depends on timeline.


> I think that comes down to self-proclaimed experts vs experts as decided by their peers.

I think the correct criterion is neither of these: it's whether the expert can back up their expertise with models that have a good predictive track record. That's the real test of whether they actually know what they are talking about vs. just blowing smoke. Unfortunately, this criterion is hardly ever applied that I can see.


I see it as a spectrum, where to be an "expert" you need to be both skilled in the field and a good communicator/politician.

The problem is, in those areas where there's not a tight feedback loop as mentioned in the article, it's possible to build your way to the top by being just somewhat skilful in the field and knowing how to appear confident, how to (over)sell your work, and when something is technically right but you should not say it because politics. And on your way to the top you're going to out-compete those who are very skilful in the field but don't have such good communication abilities.


I agree with this. Let's say, for example, you are one of the world's leading immunologists when a global pandemic breaks out. Going on TV because you "by definition want to appear on TV" is tautological; you could say that about anything on the list. A leading expert may see that they have a responsibility to help lead the public, and it's perfectly possible that the desire for media attention for personal fame is completely irrelevant. The desire for media attention to help guide the course of a pandemic to a safer path qualifies as "because they wanted to appear on TV," but the GP reads uncharitably.


I don't see how 'disliking the process' equates to 'not benefitting from the outcome'.

I dislike interviews, but I like having a job.


Well, since many of these "experts" are professors with tenure, they can't really lose their jobs, nor can they readily increase their salaries. They might be able to more easily get funding for their research/students from nontraditional sources (e.g. industry grants), but many of them would not, because they value the estimation of their peers, which might be (rightfully) hurt. Not all, or even a majority, care about money, or they wouldn't have chosen the profession. Some of this has changed with startup board positions, industry coalition funding, and academic promotional departments, but not quickly.

All of that goes out the door with pundits and shills, who don't just appear once, but have chosen the field for fame/fortune... and for those who have largely chosen to sell their research topics/results for profit. However, those are relatively easy to spot by most viewers and journalists who care.


There isn't any direct benefit for many of the people I had in mind.

Indirect benefits are harder to evaluate and will by nature involve a bunch of hand-waving, but there is a huge difference between, say, holding a research chair and having a consulting business as an "available expert".


> I suspect this misses some nuance; there is a meaningful difference between people who appear on the media because they want exposure, and people who do it because they want their field of study to be well represented.

I don't think there is a meaningful difference. First, I don't believe people can make choices in any kind of isolation. If you believe you are making a choice without your ego involved, you have simply deceived yourself. Congrats, you have a high functioning ego. Second, even if they could, audiences are not capable of concluding with any reasonable certainty which is motivating someone appearing on television. I don't mean that derisively, like people are stupid, I mean it is essentially beyond humans to do this.

IMO, in the absence of actionable information, it is pointless to speculate. Basically, you cannot know it so it's not worth thinking about.


I think I see your point - from the point of the viewer it's impossible to tell why someone has agreed to be interviewed.

I guess my hypothesis is that you get better information from people whose reason for agreeing to interviews is not self promotion. If true, that's actionable.


>- Science is science, but the practice of science is a social system

Well, there's no science - except if one is a Platonist (and believes in the concrete, even realer-than-real, existence of ideas).

There's just the practice of science.

And even if there were a science outside of its embodiment in practice, what would it be? Where are its iron laws written? There are 100,000 books on epistemology, the philosophy of science, methodology, and so on, but there's no concrete, singular set of guidelines or tenets for practitioners to uphold at every point (besides generalities like "experiment", "falsifiability", "process", etc).

And these things mean even less in the soft sciences, from sociology to economics.

>The goals of being popular and asked about important questions and the goals of whatever your field are ... two different things.

It's not just the media goals that are at odds with your "pure" field's goals.

So are (or can be) the goals of your university administration, of your state, of your industry sponsors, of your vanity and/or greed, or your career advancement, of those that give grants, of the public opinion, of your personal ideologies, and so on...


> Well, there's no science - except if one is a Platonist (and believes in the concrete, and even real-er than real- existence of ideas).

> There's just the practice of science.

I disagree.

There are scientific models that make predictions and can be judged by the accuracy of those predictions. That is true regardless of how those models came to be--who built them, what credentials they had, how their peers regarded them, or even what philosophical beliefs they had (Platonist or otherwise). The models and their predictions, and how accurate (or inaccurate) those predictions are, are all independent of all that. Those models are science separate from how science is practiced as a human social endeavor.

So I would say the answer to the article's title question is simple: If the experts are basing their expertise on models with a good predictive track record, then yes; otherwise no.


>There are scientific models that make predictions and can be judged by the accuracy of those predictions.

That's neither here nor there though.

Those models might be abstract (in the sense that they're not concrete objects), but they are still an example of the concrete practice of science -- they were thought up and formulated into specific words, measurements, and papers.

In other words, models come to life by the concrete practice of science by humans (with all its faults). They don't manifest by some abstract science.

>Those models are science separate from how science is practiced as a human social endeavor.

It's the opposite.

Those models were built not "separate from how science is practiced as a human social endeavor" but exactly by people practicing science as a human social endeavor. And they were shaped by exactly the things you say they were not: who built them, what credentials they had, their beliefs and imperfections (e.g. lack of statistics knowledge), etc.

For example, if the scientists (actual persons) creating a model are greedy, they can create a model that advances their finances (rather than science) by optimizing for grants or publicity or "fancy results" or sponsors, against accuracy.

One might still counter, sure, that the model can be verified independently of those biases and personal faults, etc.

But that would be wrong too. Because there's no abstract verification. Verification also happens by actual people, as an actual process, and is prone to the same issues: sloppiness, greed, shallow peers merely linking to a paper they don't understand well, ideological biases, simply not caring, and so on.

They're not the abstract idea of science.

>If the experts are basing their expertise on models with a good predictive track record

Even that is not some foolproof criterion.

Because you can always either make the model vague enough, or shape, manipulate, skew, or explain away the subsequent measurements enough, to get a "good predictive track record". It's much easier in science than it is in sports bets, where a team either wins or doesn't.

See, the verification of those predictions also happens by humans (with all the above issues).


> That's neither here nor there though.

I disagree. I understand the issues you're raising, but there is a key difference as far as we, members of the public looking at science, are concerned. The process of building models can't really be critiqued by non-scientists; but the process of verifying the predictions of models can, because predictions, at least predictions that non-scientists care about, can always be boiled down to something that anyone can test. For example, if you want to test whether relativity works, just use the GPS on your phone: if the location it tells you you're at is correct, you've just verified predictions made by relativity.

It is true that scientists do not always boil down their predictions to such simple tests, but as members of the public, our response to that is easy as well: if a scientist can't boil down the predictions of their model to a simple test like that, don't believe the model. In other words, we as members of the public can give incentives with regard to predictions that we can't give with regard to model building, or more generally scientific research as a whole.

> you can always either make the model vague enough or shape, manipulate, skew, explain away the subsequent measurements enough to get a "good predictive track record".

You can only fake tests that aren't real tests anyway. You can't fake something like GPS working: either it consistently tells you where you are correctly, or it doesn't. So that's the kind of tests that should be required before members of the public accept the claims of any scientific model.


> There's just the practice of science.

Sure. But the author doesn't state that there is ideal science (STEM) and science muddled with politics (social sciences).

The author says that there are good social institutions (like the American judicial system) and bad ones (like the Russian judicial system), and that STEM is a case of the former (although it has quite a lot of problems as well), while the social sciences are rather a case of a bad, corrupted institution.

I think the author's idea of a feedback loop is really interesting and reminds me of how things go in other social institutions as well.

When we have feedback loops which incentivize corruption, we get corrupt, badly functioning institutions, and vice versa. I think "Why Nations Fail" approaches this from the same angle.


How you've described the social sciences being muddled with politics is the same in large parts of STEM, particularly the Technology and Education parts. Of course, as GP stated, everything is political, so securing tenure, being invited to speak, etc., are political and social decision-making processes that impact all aspects of STEM.


> How you've described the social sciences being muddled with politics is the same in large parts of STEM

Sure, as anything else. And I've explicitly stated that this is not the point.

The point is that politics leads to bad science in the former case, and to good (relatively, and for the most part) science in the case of STEM, exactly like politics leads to a great judicial system in the US and a terrible judicial system in Russia.

Because politics can lead to great checks and balances and good incentives, or it can lead to cronyism, corruption, and regulatory capture, based on how the institution is constructed/designed and its relations with other institutions.


> The average news consumer knows it, even if they're not able to articulate it.

Are you sure about this? Did you spend a lot of time interacting with average news consumers or is this based on other data?


What even is an “average news consumer”?


This is a fantastic comment, and spot on I think. I concur with everything you've said.


There's an implicit assumption here that someone who refuses to appear on TV when asked has a more valuable opinion than someone who accepts. This seems on its face to be false.


While I mostly agree with everything you've said, I truly believe

>...The average news consumer knows it, even if they're not able to articulate it

is very far from the truth.


The solution is to #abolishcopyright. If you don't allow people to exchange ideas freely, and you allow entities to control monopolies on distributing information, you are going to have a vastly suboptimal flow of information.


> The solution is to #abolishcopyright.

That's the spirit! And abolish all privatized and monopolized knowledge. Abolish Silicon Valley.

"Place Silicon Valley in its proper historical context and you see that, despite its mythology, it’s far from unique. Rather, it fits into a pattern of rapid technological change which has shaped recent centuries. In this case, advances in information technology have unleashed a wave of new capabilities. Just as the internal combustion engine and the growth of the railroads created Rockefeller, and the telecommunications boom created AT&T, this breakthrough enabled a few well-placed corporations to reap the rewards. By capitalising on network effects, early mover advantage, and near-zero marginal costs of production, they have positioned themselves as gateways to information, giving them the power to extract rent from every transaction.

Undergirding this state of affairs is a set of intellectual property rights explicitly designed to favour corporations. This system — the flip side of globalisation — is propagated by various trade agreements and global institutions at the behest of the nation states who benefit from it the most. It’s no accident that Silicon Valley is a uniquely American phenomenon; not only does it owe its success to the United States’ exceptionally high defence spending — the source of its research funding and foundational technological breakthroughs — that very military might is itself what implicitly secures the intellectual property regime.

Seen in that light, tech’s recent development begins to look rather different. Far from launching a new era of global prosperity, it has facilitated the further concentration of wealth and power. By virtue of their position as digital middlemen, Silicon Valley companies are able to extract vast amounts of capital from all over the world. The most salient example is Apple: recently crowned the world’s most valuable company, Apple rakes in enormous quarterly profits even as the Chinese workers who actually assemble its products are driven to suicide." [1]

[1] Wendy Liu, https://tribunemag.co.uk/2019/01/abolish-silicon-valley


Great article; thanks for the link.


> able to extract vast amounts of capital from all over the world

Wow. Because bringing workplaces to China made both the Chinese and Americans worse off, you know? It would be much better if the Chinese had continued working in rice fields and Americans bought electronics assembled in the US at much higher prices. To hell with this globalization.

(sarcasm)

Such an "economics is a zero-sum game" mindset; ridiculous.


Yes, experts are real.

Unfortunately, experts are only expert in a few subjects, and experts (well... all humans even) are bad at knowing when they've left their field of expertise.

Linus Torvalds for example is clearly an expert programmer. And then suddenly he talks about AVX512 and I wonder if he knows anything about low-level programming or optimization. (https://www.realworldtech.com/forum/?threadid=193189&curpost...).

Case in point: with Rocket Lake (on 14nm, no less), the power limits associated with AVX512 have already been largely lifted. It didn't even take a few months for Linus's words to become obsolete.

https://travisdowns.github.io/blog/2020/08/19/icl-avx512-fre...

100MHz difference single-core, 0MHz difference multicore. Virtually no penalty for using AVX512 on new systems. Linus was ultimately trapped in the Skylake-X way of thinking, and not looking forward to the next chips.


Expert opinion isn't really worth anything until it's been stress-tested by other experts. I worked with experts in the context of patent and trade-secret litigation. I picked people at the top of their field from prestigious universities with impeccable credentials.

If you ask them off-the-cuff questions, they are often wrong. But if they investigate and think about it, they'll correct themselves.

Experts are also just as susceptible to bias, motivated reasoning, groupthink, etc. as anyone else.

This is why academic papers are supposed to be a back-and-forth discussion. "TRUST THE EXPERTS" has a plural for a reason, and should be understood to mean: trust the experts after they've done a thoughtful analysis with the best available evidence and have considered dissenting expert views.


I think the term "expert opinion" here is key. We want the results of the work of experts, not their opinion.


Other experts should look at their results and their work.

Lay people should be concerned with their opinion, but only after other experts have looked at the results/work.


If you read his argument as less about the power virus effect and more about the uselessness of AVX512 for most general-purpose integer code, and accordingly about the opportunity cost of developing and fabricating AVX512 for clients who really will not use it, then, unfortunately, he's right.

The quote "I'd much rather see that transistor budget used on other things that are much more relevant," in particular, is the key to the argument. Because that seems to be basically exactly what the Apple M1 does, and look what people are saying about it.


But Linus is talking about power budget. And Ice Lake / Rocket Lake are proving that AVX512 is well within the power budget of these chips.

M1 dedicates more chip area to integer units and also uses less power (thanks to the 5nm advantage). So what's going on with M1 isn't really a power issue at all.

When Linus says things like:

> I want my power limits to be reached with regular integer code, not with some AVX512 power virus that takes away top frequency (because people ended up using it for memcpy!) and takes away cores (because those useless garbage units take up space).

It's clear that he was unable to be forward-thinking. He was unable to see that a future chip, like Ice Lake or Rocket Lake, could fix the AVX512 power throttling so dramatically (without even going to an advanced node!).


> But Linus is talking about power-budget.

Linus covered a lot of ground in that message. He was correct on his history, whether that is history you'd like to remember or not. His transistor budget argument is very plausible, but without playing out an alternative history we can't know, and by we I mean you. At the time he wrote it he was correct on the power 'virus,' and I can't think of any good reason why he or anyone else owes Intel the benefit of the doubt that some future device would improve anything.

The AVX512 fanboi contingent will forever seethe about what Linus wrote. Meanwhile, Apple and AMD are eating Intel's lunch with no 512-bit SIMD instruction units at all. Linus had it right, and he is looking more correct every day.


> The AVX512 fanboi contingent

Just so you know... I'm an AMD Threadripper fanboy, if anything. Since that's my personal computer. More importantly though, I'm a technologist. I experiment and play with many technologies, including SIMD compute, when I can.

Either way, I don't think flamebait comments like that are very helpful for discussion. But I'll try to focus on the crux of your argument instead...

> Meanwhile Apple and AMD are eating Intel's lunch with no 512 bit SIMD instruction units at all

Apple has 4x 128-bit Multiply-and-accumulate per-clock-tick on its M1. AMD has 2+2 multiply and add x 256-bit SIMD.

Those processors are actually very beefy in terms of SIMD-implementation.

There's also the Fujitsu A64Fx, which is an ARM-implementation of 512-bit SVE. In fact, 512-bit ARM-SVE is about to be widely-deployed on Neoverse.

SIMD units are universally expanding across the industry. One of AMD Threadripper's biggest leaps forward from Zen 1 -> Zen 2 was the upgrade from 2+2 128-bit SIMD to 2+2 256-bit SIMD (doubling its throughput).

Intel is trying to stay ahead of the curve with 512, and they're really not that far ahead. Indeed, ARM's SVE has already caught up.
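A back-of-the-envelope way to compare these unit counts per clock (a sketch only; the unit counts are the ones claimed above, not verified against vendor documentation, and real throughput also depends on frequency, port pressure, and memory):

```python
# Peak fp32 FLOPs per cycle implied by a SIMD configuration.
# An FMA (fused multiply-add) counts as 2 FLOPs per lane.
def fma_flops_per_cycle(fma_units, vector_bits, lane_bits=32):
    lanes = vector_bits // lane_bits   # fp32 lanes per vector unit
    return fma_units * lanes * 2       # multiply + add per lane

print("Apple M1  (4x128-bit FMA):", fma_flops_per_cycle(4, 128))  # 32
print("AMD Zen 2 (2x256-bit FMA):", fma_flops_per_cycle(2, 256))  # 32
print("AVX-512   (2x512-bit FMA):", fma_flops_per_cycle(2, 512))  # 64
```

By this crude count, a dual-pipe AVX-512 core is only about 2x the per-clock FMA width of the M1 or Zen 2 backends, i.e. the industry gap is a factor, not an order of magnitude.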

-------

NVidia's 32-wide SIMT is basically a 1024-bit implementation, while AMD's 64-wide SIMD is 2048-bit SIMD. With all of the "Deep Learning" going on, it's actually a very important workload for a number of companies.

Linus is pretty much saying that Deep Learning / Tensorflow / etc. isn't important. Which is... backwards thinking, methinks. I think deep learning is overhyped, but I still see the benefits of general SIMD compute in everyday coding situations.


You clearly are an expert on SIMD instructions, but your comments smell like you just want to hate on Linus.

For example, I think very few people would agree with: “And then suddenly he [Linus] talks about AVX512 and I wonder if he knows anything about low-level programming or optimization.”

And “But Linus is talking about power-budget”. No, you are hyper-focusing on that strawman, but his argument stands without that point. He is making a broader point based on his experience of Intel extensions over many years and he specifically alludes to that. He is saying he believes that AVX512 doesn’t add enough benefit for the costs.

You say “Linus is pretty much saying that Deep Learning / Tensorflow / etc. etc. isn't important.” but he says “I'd much rather see that transistor budget used on other things that are much more relevant. Even if it's still FP math (in the GPU, rather than AVX512)”.

You are the leaf on every comment on this thread, which is usually a bad sign of arguing a point regardless of the validity of what others say. As I read it, your arguments seem to argue about anything except for the core thesis of what Linus wrote, or you misrepresent his argument IMHO.

Edit: also, Linus is stating his opinion, an opinion which written loosely enough that you as an expert clearly can’t actually pin down as “wrong”. To make your point about the article, you would perhaps be better looking for a different example.


> but your comments smell like you just want to hate on Linus.

> No, you are hyper-focusing on that strawman

Others I've talked to have hyper-focused on that strawman. It isn't Linus I'm necessarily mad at, it's his legion of fanboys who make crappy arguments based off of Linus's words.

It's not just the AVX512 example: people copy/paste his opinions on C++, Geekbench, and a whole slew of other issues constantly. Linus's arguments become memes throughout Reddit and are echoed without analysis.

--------

But that's what happens when you put an "Expert" on a pedestal. People take their words, meme them out into new arguments, and then they don't necessarily apply anymore in the new situations.

That's my experience with "experts". The experts themselves are real, but the words they say can become warped through discussion to the point where they're meaningless.


AVX seems to me to be a band-aid to the instruction length problem inherent in x86 CISC systems vs. RISC chips like M1.

Then again, maybe the fact that the memory and GPU are all integrated into M1 also helps propel it over the top.

How well would an AMD / Intel SoC perform if you threw a 5950X / 10900K, 16-128GB of LPDDR4X-4133, and an RTX 3090 into a blender and baked up an enormous die with a huge watercooled solution?

I dunno. I want to believe it's the architecture that's succeeding, not the SoC by virtue of being an SoC, but I don't have the knowledge or ability to judge that.


> AVX seems to me to be a band-aid to the instruction length problem inherent in x86 CISC systems vs. RISC chips like M1.

AVX is neither CISC nor RISC. It's SIMD.

We're talking about 1970s Cray systems or the 1980s Connection Machine (CM-2 or CM-5). It's a completely different paradigm of compute, an entire history of computing that runs parallel to the CISC-vs-RISC wars.

> Then again, maybe the fact that the memory and GPU are all integrated into M1 also helps propel it over the top.

M1 has an awesomely wide 128-bit x4 pipeline SIMD execution backend. Apple is making sure its SIMD unit (for NEON ARM instructions) doesn't fall dramatically behind.

M1 can perform 4x 128-bit multiply-and-accumulate instructions per clock tick. It's no slouch.

AVX512 is better than ARM Neon on M1, but even Apple sees the importance of SIMD compute.

The ARM world is talking about 512-bit SVE instructions. SIMD compute is obscure and niche, but it's a parallel computation style with decades of history. It's the proper solution to a huge host of computational problems.


Wow, fascinating! Thank you for the reply. :)


It should be noted that the point of SIMD-on-CPU is not to be the fastest SIMD in the world... but instead the closest SIMD to your data.

SIMD today largely exists in the realm of GPUs: the NVidia 3090 is a dedicated SIMD architecture, for example. But as an "external" device on a PCIe 4.0 port, your CPU can only stream ~30GB/s worth of data to it (the max bandwidth of PCIe 4.0 x16).
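That ~30GB/s figure falls directly out of the PCIe 4.0 link math (a rough sketch; real achievable throughput is lower still after protocol and DMA overhead):

```python
# PCIe 4.0 bandwidth ceiling for an x16 slot: 16 GT/s per lane,
# 128b/130b line coding, 16 lanes. This is the hard upper bound on
# CPU -> discrete-GPU streaming that the comment refers to.
gt_per_s = 16            # gigatransfers per second, per lane
encoding = 128 / 130     # 128b/130b coding: ~1.5% overhead
lanes = 16

gb_per_s = gt_per_s * encoding * lanes / 8   # bits -> bytes
print(f"PCIe 4.0 x16 ceiling ~ {gb_per_s:.1f} GB/s")
```

Compare that ~31.5 GB/s ceiling with on-die L1/L2 cache bandwidth, which is measured in TB/s, and the "closest SIMD to your data" argument follows.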

The AVX512 units inside an Intel chip are the same GPU-SIMD architecture, but smaller. More importantly, they share the L1 cache, and therefore have TB/s worth of data-sharing.

As such, AVX512 becomes useful if you need a fast CPU to calculate some things and then switch off to the SIMD paradigm for some reason.

---------

What's a good example of this "switchoff"? Well, Stockfish NNUE (now the default in Stockfish 12) is a great example.

You might know that SIMD compute, in particular on GPUs, is one of the best architectures for calculating neural nets. But GPUs are terrible with branchy code (which happens often in chess: "if(bishop) then (calculate bishop moves)" is rather difficult to do efficiently on a GPU! It's hard to explain why, but it's called "branch divergence". Look it up if you feel like it).

Anyway, a CPU is best at the "if(bishop)" kind of computation. But GPUs are best at neural nets.

Stockfish NNUE beats Leela Zero by using AVX512 on a small, lightweight neural net, while still using the CPU for the fast if() statements.

This is only made possible because AVX512 and the rest of the CPU share the same L1 / L2 cache, meaning you can get TB/s worth of data-sharing, instead of GB/s if you went "off-chip" to another device (like calling a GPU to calculate a neural net).

Furthermore, the NNUE neural net in Stockfish is composed of if-statements: only PART of the neural net updates, which means it can never be efficiently implemented on a GPU. Only something with fast if() statements can implement NNUE. (EDIT: Well... maybe not "never", but it'd be non-obvious to code up a solution. I think it's possible, now that I think of it... but it's not done in typical GPU code these days.)

GPUs / classical SIMD compute want all of their data to update uniformly, without if() statements mucking up the logic. CPUs have giant branch predictors, hyperthreading, and other such features to speed up if-statements.
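The divergence problem can be sketched in a few lines: a SIMD machine cannot "skip" a branch per lane, so it evaluates both sides for every element and blends the results with a mask. Here numpy stands in for the hardware; this is an illustration of the paradigm, not Stockfish's or NVidia's actual code:

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0, -4.0])

# Scalar/CPU style: each element takes exactly one side of the branch,
# and branch prediction makes this cheap.
scalar = [xi * 2 if xi > 0 else xi * -10 for xi in x]

# SIMD style: BOTH branch bodies are computed for all lanes,
# then a mask selects which result each lane keeps.
mask = x > 0
simd = np.where(mask, x * 2, x * -10)   # x*2 AND x*-10 both evaluated

print(simd.tolist())  # [2.0, 20.0, 6.0, 40.0]
```

Both give the same answer, but the SIMD version always pays for both branch bodies; with deeply nested if/else chains (as in move generation), that wasted work is exactly the divergence penalty described above.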

-----

A small SIMD unit for small SIMD calculations, combined with fast, CPU-style, branch-heavy code: a nightmare for GPUs to implement. It's something only possible with a CPU plus SIMD instruction assist.

------

GPUs are likely still best for large-scale SIMD, like giant matrix multiplications (or many matrix multiplications, as in video games), where if-statements are less common.


What you say about performance on NV 3090 probably applies better to much older GPUs.

NV has put a great deal of recent work into getting their system to execute regular multi-threaded code with the same sort of speedup as they have had with SIMD workloads. It might be better to think of today's NV GPU as a super-threadripper when the data to be processed is already in GPU memory. The coder's job becomes making sure enough threads are ready to run to use up all the compute units at all times.


> NV has put a great deal of recent work into getting their system to execute regular multi-threaded code with the same sort of speedup as they have had with SIMD workloads.

I think you're talking about "Independent Thread Scheduling", which has been deployed since Volta.

https://docs.nvidia.com/cuda/volta-tuning-guide/index.html#s...

https://images.nvidia.com/content/volta-architecture/pdf/vol...

You're close... but wrong in a few key areas.

What NVidia has done is make it POSSIBLE to run multithreaded code without deadlocks. There's still the thread-divergence performance problem.

That is to say: you can now do more classical lock/unlock paradigms on NVidia hardware, but the code is serialized and becomes sequential. One 32-wide SIMD unit will execute only 1x thread at a time in that case. (That is: 1/32 utilization. The SM is designed to handle 32 SIMD threads but is only executing one of them at a time)

So you still have the thread-divergence performance problem. On older GPUs, such divergence would have been a DEADLOCK instead. So you've gone from 0% performance utilization to 3% utilization, which means your code actually will finish.

But it doesn't mean that your code actually has any degree of worthwhile performance. Indeed, CPUs have proven time and time again that per-thread branch prediction and prefetching are best for that kind of code.

If you have a great deal of divergent if/else statements in your code, you'll want to run on CPUs. NVidia GPUs can now handle those situations better (ie: without deadlocking if those if/else statements contain thread-synchronization statements), but they aren't actually high-performance in those use cases.
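The divergence penalty can be made concrete with a toy lockstep-execution model (a simplification of how a 32-wide SIMT unit serializes branches, not a real GPU simulator): every distinct branch path taken by any lane in the warp costs one serialized pass over the whole unit, so utilization falls with the number of divergent paths.

```python
# Toy model of SIMT branch divergence (a simplification, not a GPU simulator).
# A 32-lane warp executes in lockstep; every distinct path taken by any lane
# costs one serialized pass, during which only lanes on that path are active.

WARP = 32

def utilization(lane_paths):
    """Fraction of lane-cycles doing useful work, given each lane's branch path."""
    distinct_paths = set(lane_paths)
    total_lane_cycles = len(distinct_paths) * WARP  # one pass per distinct path
    useful = WARP  # each lane does useful work in exactly one pass
    return useful / total_lane_cycles

uniform = utilization([0] * WARP)                    # all lanes agree: 100%
two_way = utilization([i % 2 for i in range(WARP)])  # if/else split: 50%
worst = utilization(list(range(WARP)))               # 32 distinct paths: 1/32

assert uniform == 1.0
assert two_way == 0.5
assert abs(worst - 1 / 32) < 1e-9
```

The worst case is the ~3% utilization figure above: the hardware no longer deadlocks, but a fully divergent warp still runs its lanes one at a time.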


I am corrected.

I made the mistake of believing what the project manager said about it.


Did you read the article? The article is specifically talking about cases where experts are incorrect within their field of expertise and then cited as a source of truth in published papers.

The question the article asks isn't really a boolean: it clearly shows some experts are real and some aren't, and asks the more important question -- how do we discern when the institutionalized "expert" in an area is really an expert?


> Did you read the article? The article is specifically talking about cases where experts are incorrect within their field of expertise and then cited as a source of truth in published papers.

Yes.

But I still think that Linus is a good character to discuss within the scope of the article. Linus is clearly considered an expert, and many people follow his bombastic words (retweeting, or using them to form the basis of their argument around here).

"Because Linus said so" is a common way to end various arguments. Linus's opinion is truly "expert", people copy it without further analysis because... well... Linus kinda deserves it. He is an expert after all.

It doesn't mean Linus is always right.

--------

The "feedback loop" of the open source world is pretty quick. If you're a bad programmer, people figure it out when they start working with your code.

But the "feedback loop" of "chip ISA opinions" is slow. There's not really any way to prove or disprove whether AVX512 is a good or bad idea.

Linus's expertise in programming doesn't necessarily translate to chip ISA design. And as such, Linus can get details (expected power-characteristics of future chips) dramatically wrong.


I think you’re over-focusing on the AVX512 argument. You even said yourself it was a product release that happened months after he made the comments. Was he supposed to understand what changes Intel made to AVX512 before they announced it? Also, he’s not even talking about clock downscaling. He’s saying that AVX512 is the wrong way to get significantly more performance vs the amount of space and power it consumes (definitely true for most applications and 100% true for kernel work) and that Intel would be better served focused on general perf (which isn’t wrong given how they’ve been trounced by AMD and Apple).

Consider also that Linus worked at Transmeta for a relatively long time trying to build a CPU company. I would trust his opinions on chip ISA design more than most software developers'.

You’ve basically taken his post out of context, misinterpreted it to be saying something he isn’t, and claiming it’s wrong because some details have changed on the ground after he made it.

This is the problem with expert opinion. It’s so easy to misinterpret. It’s also possible to get some details wrong without invalidating the larger point. Figuring out when the expert is wrong is hard, but claiming that Linus doesn’t understand chip design simply ignores the context of Linus’ perspective and his actual professional experience.


> Also, he’s not even talking about clock downscaling

Quote the post in question:

> not with some AVX512 power virus that takes away top frequency (because people ended up using it for memcpy!)

This is pretty obviously talking about the down-clocking issue.

> You’ve basically taken his post out of context, misinterpreted it to be saying something he isn’t, and claiming it’s wrong because some details have changed on the ground after he made it.

AVX on Sandy Bridge was worthless: it wasn't any faster than SSE because Sandy Bridge was still 128-bit (even on AVX code).

Anyone who has followed ISA-development for decades would recognize that the first implementation of an ISA is often sub-par and sub-optimal. It wasn't until Haswell before AVX was faster than SSE in practice.

AVX512 was first implemented on Xeon Phi. Then it was ported over to servers in Skylake-X. AVX512 from there on remained a "1st generation" product for a few years (much like AVX on Sandy Bridge, before Haswell). From the history of these ISA-extensions, we generally can expect downclocking issues or inefficiencies in the first few implementations.

-----

In fact, some portions of AVX still remain suboptimal. vgather for example, isn't really much faster than standard load/stores. Maybe they'll be suboptimal forever, or maybe some hypothetical future CPU could optimize it.

Either way, I know that I shouldn't make an argument based on the "stupidity of vgather" instruction. There's an "obvious" optimization that could happen if that instruction becomes popular enough to deserve the transistors. At best, if I were to argue about vgather instructions, I'd preface it with "current implementations of vgather on Skylake-X", or similar wording.

CPUs change over time, much like programming languages or compilers. I expect a degree of forward-looking ability from experts, not for someone to get lost in the details of current implementations.


I don’t think I’ve ever met an engineer who’s a domain expert who talks like that. Unless you have actual specific information about future plans and reason to believe in them, expecting everyone to qualify their prognostication with “this generation but it might change in some hypothetical future” is just not useful. That’s generally true of all technical topics. What’s stated today can change in the future. These aren’t physical limits being discussed.

Finally, Linus’ main advice is to stop focusing on AVX and focus on architectural improvements. It took an enormous number of engineers to implement AVX512 and then even more to get rid of the power hit. I trust Linus (and back that up with my own experience and observations in the industry) that Intel would have been far better served focusing on architectural improvements like Apple did. It’s a blind spot Intel has: they know how to bump clocks and create x86 extensions, but they don’t trust architectural improvements to deliver massive gains. Hell, this isn’t the first time. NetBurst (which was their attempt) screwed up so badly they had to return to the P3 architecture with the Core line. Intel has the engineers, but their management structure must be so bloated and inefficient to fail with such regularity.


> I don’t think I’ve ever met an engineer who’s a domain expert who talks like that. Unless you have actual specific information about future plans and reason to believe in them, expecting everyone to qualify their prognostication with “this generation but it might change in some hypothetical future” is just not useful. That’s generally true of all technical topics. What’s stated today can change in the future. These aren’t physical limits being discussed.

If I had claimed, 5 or 10 years ago, that ARM sucks because it's in-order, I'd have been making a similar mistake to Linus's. There's nothing about the ARM ISA that forces in-order execution; it was just the norm (because back then, it was considered more power-efficient to make in-order chips).

There's nothing in AVX512 that says "you must use more power than other instructions." That was just how it was implemented in Skylake-X. And lo and behold, one microarchitecture later, the power issue is solved and a non-issue.

Similarly, ARM chips over the past decade switched to out-of-order style for higher performance. Anyone who was claiming that ARM was innately in-order would have been proven wrong.

----------

Maybe it takes experience to see the ebbs and flows of the processor world. But... that's what I'd expect an "expert" to be. Someone who can kind of look forward through that.

I mean, I've watched NVidia GPUs go superscalar (executing INT + FP simultaneously). I've watched ARM go from largely in-order microcontrollers into out-of-order laptop-class processors. I've watched instruction sets come and go with the winds.

------

If that's not good enough for you, then remember that Centaur implemented AVX512 WITHOUT power-scaling issues back in 2019, predating Linus's rant by months. A chip without AVX512 power throttling did exist, and he still ranted in that manner.


Nowhere did Linus say that AVX512 being a power sink is an unavoidable consequence and should be dropped for that reason. He just said "AVX512 sucks right now - they should spend their resources elsewhere because the ROI isn't there". When he made that statement he was factually correct - Intel had spent a lot of resources bringing AVX512 to market, the first version is shit (making correct selection of turning on AVX512 support tricky), & they had to spend even more correcting it.

If you had said 10 years ago "ARM sucks because it's in-order", you wouldn't be factually wrong that in-order sucks for performance, but you would be factually wrong because the first iPhone was already partially out-of-order, and 10 years ago you'd be looking at the iPhone 4, which was a superscalar Cortex-A8 design. Also, in-order is fantastic for battery life; that's why M4 and below are still in-order. Note that saying this doesn't imply it will remain the case for all time, which is how you seem to be interpreting such statements, and that would be an extremely poor interpretation on the reader's part.


Linus's expertise in programming OS kernels in C doesn't translate very far. It went far enough to make a fast implementation of a distributed version control system designed by somebody else, but not far enough to make an accessible command-line interface for it, or a reliable storage engine.

(TIL that my git repository at work is corrupt. Again. Second time in two years. "git fsck" thinks it's fine. "git repair" spins forever. Thinking hard about Fossil just now...)

We might reasonably guess that he is dead wrong about the merits of C++ as an implementation language. Observationally, he would have plenty of company.


Surprised that the classic "Conditions for intuitive expertise: A failure to disagree." paper by Daniel Kahneman and Gary Klein is not referenced

https://psycnet.apa.org/record/2009-13007-001

My takeaway: without short feedback loops it is hard to build expertise. If it takes 2 years to learn that the individual you paroled reoffended, you can't learn (so there is virtually no expertise among parole officers or university admissions officers). In these scenarios, simple linear models outperform "experts" by a wide margin.

See: "In Praise of Epistemic Irresponsibility: How Lazy and Ignorant Can You Be?" by Michael Bishop

https://philpapers.org/rec/BISIPO


One of my problems with this line of argument is that it produces a sort of relativism of opinion, where every opinion is just as valid as any other. And then you're at post-truth and post-fact, sadly...

If Carlsen is a fake, so is every chess player he has ever played, the whole system? Epidemiologists lied about masks (or they really didn't know better 3-5 months into the pandemic, also not good), discount everything else said? I've heard these arguments often lately :/

Experts not being "real" implies that there's no absolute expertise whatsoever and implicitly legitimizes dismissing any argument whatsoever on a single failure and allows people to believe that any opinion (including their own) is just as valid. I think this is in part to blame for our "post-fact/post-truth" phenomenon. Many people decide experts or government were wrong/have lied before and can no longer be trusted. And when an outsider comes along with contrarian but convenient ideas, why not believe them, nobody else is to be trusted after all and every opinion is valid.

Any expert opinion can be wrong but the prior is that an expert opinion is much more likely to be correct than your own, or a politician's, or a blogger's.

NB: I want to note that the author's point is much more nuanced and (more importantly!) narrow, but it's not evident from the title or a superficial reading.


"Epidemiologists lied about masks (or they really didn't know better 3-5 months into the pandemic, also not good)..."

Technically, if my timeline is right[1], about 2 months into the pandemic, for his example. 3-4 months generally. Plus they had a prior belief that masks wouldn't be useful.

[1] The world outside China learned about the thing at the end of December; his example is the first week of March.


'Experts' often disagree. Which makes you wonder if the whole system is actually invalid. Experts are merely one data point, at the end of the day you have to look at the entirety of the data yourself to make a decision.


I think this argument conflates two things, one of which is true and slightly terrifying. The terrifying thing is how few scientists understand the statistical underpinnings of their studies. The conflation happens because he thinks this is stratified by field. In practice, there are plenty of sociologists with a decent grasp on this, and a truly astonishing number of biologists who don't. Don't believe me? Read the studies of biology papers and how many have serious problems in their statistical treatment.

So we've got a serious problem with how science is being done. Probably doesn't affect whether or not expertise exists, or which fields it exists in. Unless we're going to start saying biology isn't real.


Perhaps a possible counterpoint is to argue that fancy stats knowledge is not really necessary for biologists. The reason they have to do it in their papers may be more due to bureaucracy / cargo cult / numerical theater etc. Look at the scatterplot etc. If the details of the fancy evaluation make or break the discovery, then it's probably a fluke anyway. Yeah, most papers are uninteresting and are purely done for funding and CV reasons. Those who pay expect papers, but aren't really sure why. Certainly not to read them themselves.


> Unless we're going to start saying biology isn't real.

It probably helps to recognize that statisticians caught one of the first modern biology researchers fudging his numbers: https://en.wikipedia.org/wiki/Gregor_Mendel#Mendelian_parado...


Except they didn't; they alleged that the numbers had been fudged on rather scant evidence.


As I understand it, Fisher had two complaints. First, that the trifactorial data was just wildly too accurate (using a chi-squared approach). This seems to be resolved in the scientific community as an off-by-one error: if instead of 10 seeds, Gregor had used 11 on occasion, that's enough to resolve the issue.

The other issue is the chi-squared analysis of Mendel's experiments in aggregate, which when pooled also fit the pattern he wanted to illustrate too neatly. And as best I know, nobody has adequately disputed that analysis.
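For a flavor of the first kind of test, here is the chi-squared goodness-of-fit computation for one of Mendel's best-known counts (5,474 round vs. 1,850 wrinkled seeds, against the predicted 3:1 ratio); Fisher's aggregate complaint pools many such statistics across experiments. A pure-Python sketch, using the identity that the upper-tail p-value for chi-squared with 1 degree of freedom is erfc(sqrt(x/2)):

```python
import math

def chi_squared(observed, expected):
    """Pearson's chi-squared goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def p_value_1dof(x):
    """Upper-tail p-value for chi-squared with 1 degree of freedom."""
    return math.erfc(math.sqrt(x / 2))

# Mendel's round-vs-wrinkled seed counts; a 3:1 ratio predicts 5493:1831.
observed = [5474, 1850]
total = sum(observed)
expected = [total * 3 / 4, total * 1 / 4]

x = chi_squared(observed, expected)  # about 0.26
p = p_value_1dof(x)                  # about 0.61

# This single experiment fits the 3:1 model comfortably. Fisher's objection
# was that essentially ALL of Mendel's experiments fit this comfortably,
# which is itself statistically improbable when pooled.
assert 0.2 < x < 0.3
assert 0.55 < p < 0.65
```

The point of the aggregate test is that honest data should occasionally fit badly by chance; a long run of uniformly excellent fits is suspicious even when each individual fit looks fine.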


The only actual analysis I saw came up with a p-value of about 0.07, which is just a complete nothingburger. There's no explanation for that because there's nothing to explain.


On March 4, 2020, Michelle Odden tweeted about 800 epidemiologists not wearing masks.

Because I don't commute anymore, I'm way behind in my podcasts. Three podcasts I listen to, and am behind in, are the Naked Scientists (a British popular science broadcast), and the Nature and Science podcasts (which cover weekly scientific news). The interesting thing is that, as I listen to them now, I'm going through a live view of the past with present hindsight. It's fun.

One interesting thing is that aerosol, airborne transmission of many viruses was (and is?) controversial, or at least an open research question. (Recognition of aerosol transmission of infectious agents: a commentary [Jan 2019] https://bmcinfectdis.biomedcentral.com/articles/10.1186/s128...)

Up until at least mid-April, the default belief of the medical community seems to have been that Corona viruses were not, or at least not particularly, airborne. Instead it would be spread by physical contact (contaminated surfaces, etc.) or large droplets produced by coughing or sneezing. As a result, the default use of masks was not particularly recommended.[1] (And that's before you consider the scarcity of PPE for medical users at the time.)

After about mid-April, the facts on the ground seem to change, and things like aerosol transmission and asymptomatic spreaders suddenly become a much larger concern. (And a lot of people on my podcasts got sheepish with the egg on their faces.)

As a result, it's not particularly surprising that an epidemiologist would not consider wearing a mask to be particularly useful in early March.

[1] One thing the article does not touch on is the tendency of actual experts to get feelings akin to religious hysteria based on internal fights, with the resulting dramatic statements spilling over into outside conversations. (I had one paper ripped to shreds by an old-time network guy because my advisor and I had the temerity to consider keepalives/heartbeats.) I suspect there may be something similar behind the default assumption, up until April, that masks were bad.


I think stats is the culprit. Or lack of it. If there's one area of study that ties together observations and models, it's stats. But stats suffers from the same problem that programming does: people think it's a tool that they just need to know a little bit of, so they don't defer to the proper authorities in the area.

Of course this is just anecdotes, but I do have a friend who did stats as a phd and this is the attitude that seems to be common: stats (and code) are minor tools that won't affect the outcome of our experiment.


Some people who studied social sciences might even be uninterested in statistics and look at it as off-topic.

After all, most people aren't good at mathematics (I think), and someone who, say, chose to study psychology might have preferred that there weren't any math/stats courses in the program at all.

(I know one such person in real life)


I think experts exist, but there's a problem with translating expert opinion in evolving fields into actionable advice that can be communicated to the masses through the news, and that can withstand aggressive, organized skeptical attack.

Okay, that's a bit of an exaggeration, just because it rolls a whole bunch of different things into one blanket statement, and sounds like a rant, which it kind of is.

The world is fucked.

What's the point of an organized skeptical attack on experts? I think I can attribute to Imre Lakatos the thought that if you can completely discredit the experts, and make it impossible to decide on facts, then you create a power vacuum that can only be filled by those who are capable of acting in the absence of facts, such as thugs and tyrants. You have effectively censored the experts with greater efficiency than any government censorship program could ever dream of. I believe this is the precise reason for "the war on the truth."


A lot of the points are interesting, but I couldn't help but get hung up on the link to the covid analyist who supposedly beat the experts, at https://www.maby.app/covid/

A couple of immediate red flags: the predictions stop in early May, and there appears to be no weighting of how "important" a given prediction is. Perhaps the former is just because a contest ended at that time. For the latter, of course, importance is subjective, but ranking these two as equivalent is silly:

  * How many cases per day will Washington state average for the week ending June 7? (Tom: good prediction, Experts: bad)
  * How many US COVID deaths will occur in 2020? (Tom: bad prediction, Experts: good)


Prediction: in 10 years, no "expert" will fail to put their work in Git. Git provides a zero-cost (in time and money) way to share your full work with auditability; where fixing mistakes continuously is expected and not something to be embarrassed about; where people can instantly clone your work to reproduce it and test better ideas upon it; where collaborating with proper attribution is done automatically, et cetera.

Look in any field—epidemiology, computer science, physics, chemistry, medicine, et cetera. If the leading experts aren't using git yet, look for the ones that are. They will be the leading experts of tomorrow.


I like that prediction, but git is actually a bit of a step back from older more traditional source control systems. Rewriting history is far more common with git to the point where teams have to make explicit policies on when that is allowed.

Even if git becomes common in science outside of code, I don't think it would overcome the social pressure to appear infallible.


> Rewriting history is far more common with git

How do you mean? It's effectively impossible. You git push something, and you leave an immutable trail that someone somewhere may have cloned, so even if you force-push or delete, you can't erase it. Assuming it was of any importance and someone was watching (and if it wasn't of any importance, then it's no big deal to force-push or delete).
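It helps to see why: git objects are content-addressed, so "rewriting" history never mutates an object in place; it creates new objects with new hashes, and the old ones survive (unreferenced) until garbage collection. The hashing scheme is easy to reproduce; this computes a blob's object ID the same way `git hash-object` does:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    """Compute a git blob's object ID: SHA-1 over a 'blob <len>\\0' header + content."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Matches `echo 'hello world' | git hash-object --stdin`
assert git_blob_id(b"hello world\n") == "3b18e512dba79e4c8300dd08aeb37f8e728b8dad"

# Any change produces a different ID; the old object is untouched, merely
# (possibly) unreachable. This is why a force push hides history rather than
# erasing it, until gc actually prunes the unreachable objects.
assert git_blob_id(b"hello world!\n") != git_blob_id(b"hello world\n")
```

So "immutable" here means objects are never edited in place, not that unreferenced objects are retained forever.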


Saying something is immutable because someone may have saved an old copy before you mutated your own copy sounds very fragile to my ears.


If you push to GitHub it won't delete it


If you force-push to GitHub, the original commit (if not tagged or given a branch name) is absolutely de-referenced and may not be recoverable.


Scientists are the new priest class. They deliver edicts and answer the public's questions about the universe. The problem is that the answer to 99% of the public's questions is "I don't know" or "It's very complicated." We don't like that. We want the right answer, right now, delivered in one simple sentence. We find the one goober who hasn't walked inside of a lab in decades who wants to feel important to give us what we want.

Worse is that people envision scientists as unbiased saints. That's just not real. Real science is about one thing and one thing only. Money, aka grants. Grants get you labs and underlings which bring more money and maybe someday tenure. And once you have tenure, you can finally publish what you really think about what interests you instead of chasing funding fads. Except that part never happens.

That means you study the subjects that give funding. You give papers that increase the chance of more funding for more papers. Your accuracy? Doesn't matter. Funding matters. No one is going to check your work anyhow beyond simple peer review. Someone tries to recreate your work or recalculate your numbers? Tell them you lost the data. It's a bit embarrassing but no one is going to call you on this stuff anyhow.

There is only one solution to the replication crisis. Funded third-party replication for all experiments. Money is the root of this problem. It has to be the solution. How many scientists are going to publish "mistakes" if they know that four different teams at four other universities are going to automatically attempt replication?


Many people only interface with "science" through school classes and popular science magazines / documentaries / IFL science / antitheist memes, arguing against hoaxes, flat Earth, creationism etc.

People dislike your answer because they have no idea how the sausage is made, when actually your description is quite realistic.


> Real science is about one thing and one thing only. Money, aka grants

no.


If you think that real research isn't dependent on funding, then I really don't know what to tell you.


Regardless of the quality of knowledge, people with better social and networking skills will become better known than those without.


Among the general public? Definitely.

Social and networking skills also matter within your professional community but they are not a substitute for publishing in top journals/conferences.


Perhaps your experience is different. Having the right name as the first or last author helps a lot in peer review and publication. Even in “double blind” reviews, it’s easy to find the authors usually.


Sort of true, but not quite. You may infer that a paper is from a certain lab or that the first author may be someone specific as it builds on their work... But last author? Not as easy as supervisors/lab leads can change.


It's not the supervisors who become famous; it's the lead professor of the lab. Usually in engineering papers/journals, the first/last author is the lead professor of the lab, who does nothing more than get the funding for the project.


> "This year, we’ve come to better appreciate the fallibility and shortcomings of numerous well-established institutions (“masks don’t work”)… while simultaneously entrenching more heavily mechanisms that assume their correctness (“removing COVID misinformation”)."

I suspect in a cultural moment of mass-skepticism of even valid systems (e.g. masks/election/vaccines/wikipedia) it's very hard to touch on the fallibility of those systems without putting people on the defensive. It shouldn't be in an ideal world -- making these things more auditable and rigorous only makes them less prone to questioning in the long term, but I suppose many feel faith in the system is more important at this moment than incremental improvement.

[However this really doesn't answer the n=59 conundrum, which I think requires a different lens]


I want to share this tidbit. When I studied portfolio management, I had to learn not just how to talk to people about their risk tolerances, but how to ascertain whether they knew their own tolerances. The concept was called "calibration": how well someone is able to know that they are wrong or don't know something. Basically, "high calibration" is the opposite of Dunning-Kruger in my mind.

The text cited a study that measured calibration by profession. The highest calibrated profession was meteorologists. It makes sense if you think about it: I prognosticate rain tomorrow, tomorrow comes and it's sunny - I am forced to confront the fact that my prognosis was wrong. So not only is there a way to know who's better and who's worse, but also constant reminder that "I can be wrong despite my best work"

The lowest calibration profession was doctors. Which also makes sense. Imagine I am a doctor and I miss an early sign of a disease. The patient dies 5 years later - I may not even know, and even if I do, very unlikely am I confronted with the specific point in time 5 years prior where I made the wrong call.

So the thing that matters with expertise is - how often is the person getting feedback. In the article they cite chess as having "real experts" - of course. Because a chess player gets feedback on how good they are every time they win or lose a match.
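The calibration idea is easy to make concrete: a forecaster is well calibrated if, among all the days they said "70% chance of rain", it rained about 70% of the time. A minimal sketch (made-up forecasts, purely illustrative):

```python
from collections import defaultdict

def calibration_table(forecasts):
    """Bucket (stated probability, outcome) pairs and report the observed
    frequency of the event for each stated probability."""
    buckets = defaultdict(list)
    for prob, happened in forecasts:
        buckets[prob].append(1 if happened else 0)
    return {p: sum(v) / len(v) for p, v in sorted(buckets.items())}

# Made-up rain forecasts: (stated probability, did it actually rain?)
forecasts = (
    [(0.2, True)] * 2 + [(0.2, False)] * 8 +  # said 20%, rained 2/10 days
    [(0.7, True)] * 7 + [(0.7, False)] * 3    # said 70%, rained 7/10 days
)

table = calibration_table(forecasts)
assert table == {0.2: 0.2, 0.7: 0.7}  # perfectly calibrated on this toy data
```

A meteorologist gets this table filled in daily; a doctor whose misdiagnosis surfaces five years later, if ever, does not, which is the feedback-loop point exactly.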

On the flip side, how often is someone in a very soft field getting feedback? What's the way for an "expert" in say Gender Studies, to wake up in the morning and be confronted with "the thing I did yesterday didn't work?"

The other thing about experts is that they are optimizing for a single thing. E.g., ask your compliance person at work if you can do something; the answer is "NO" because they are optimizing for not being sued, but in reality that's not the only concern. Or, in the pandemic, ask an economist if we should lock down: they'll say no because they understand the financial consequences. Ask a medical person and they'll say yes because they are focused solely on the number of sick people. In reality the world is complex and there are tradeoffs.

Related, experts are often concerned with high-impact issues which may be different from individual issues. EG: in the USA, by default, kids get the Hep B vaccine on their first day of life. On country level this makes sense because there are a lot of mothers out there who have Hep B and don't know it. But when my wife was giving birth, we knew to opt out despite the recommendation because we understood that the experts are optimizing for a thing that doesn't relate to us.


Any chance you can recall enough of that paper to find a citation? I'd be interested to read such a paper.


Agreed, that'd be interesting


> The lowest calibration profession was doctors

I find it annoying that the health care system seems uninterested in feedback. It's just treatments per hour it seems, but does it work or not? Who cares. (I'm in western/northern Europe)


> Basically, "high calibration" is the opposite of Dunning-Kruger in my mind

Dunning-Kruger is perfect “relative calibration” somewhere near the 70th percentile, as the finding of the study for which the effect is named is essentially that people above and below that point assess their relative performance closer to it than it actually is.

It doesn't seem to address calibration in the absolute sense at all.


Experts do exist, but I think many alleged experts often give in to the temptation to use the expertise in their field to represent expertise in a field in which they cannot claim the same level of expertise.

This is my opinion because I’ve experienced the temptation to do just that many times and I’ve witnessed instances where that has happened as well.

I don’t think this behavior is limited to people who have expertise in any field but it might be most damaging when experts do commit that logical fallacy.


One of my theories based on life experience is that many fields do not attract the brightest people to begin with, and consequently many of the ideas from these fields are not pressure-tested enough.

In today's academia, it is basically impossible to fail to get a PhD in many fields, provided you are OK with being miserable for years at a time. At the same time, academia is filled with a ton of BS: publicly funded research being released behind paywalls, professors receiving grants for projects that are already completed, students being denied graduation until their project can be submitted to the highest-impact journal possible, having to do a long postdoc to get a tenure-track position, and so on.

For smart people that value financial freedom, academia is a terrible option at this point in time.

The current coronavirus fiasco has laid bare the lack of critical thinking in the life-sciences fields.


"One of my theories based on life experience is that many fields do not attract the brightest people to begin with, and consequently many of the ideas from these fields are not pressure-tested enough." Very well articulated; I have thought about this many times. Basically, a lot of the physical sciences face this.


My main problem with experts is they share the same cognitive biases as the rest of us: https://www.karllhughes.com/posts/experts

It's great to have academic qualifications, but you still have such a limited sphere of experience and knowledge - even more so the higher you go in academia.


One of the problems with experts is that at the edge of their field of expertise, they aren't experts of the edge. They have merely developed expertise in the building blocks that allow them to live at the edge. Quite often we are interested in things happening at the edge, and the "experts" can't really provide information at high levels of certainty.


IMO, experts are more wise to the problems of their domain than intelligent. Intelligence relates to the execution of mental tasks and the recall of specific facts. Wisdom relates to planning the approach to solve a given task.

It is whether or not the nature of the problem lends itself to expertise that defines if experts are particularly relevant for a given domain.


I'd say that experts are mostly knowledgeable, experienced, and broadly respected within their scientific domain. I don't see how intelligence or wisdom are important factors - other than for the fact that most accomplished scientists are probably rather intelligent.


Shout out to Hexo! I don't see it on here often and it's a great static blog generation tool.


The intelligence of a neural network is in the edges. And I think partially, the same goes for social networks.

An armchair expert can know better and be more extensively read than a credentialed expert. But as people will not trust (=edge) him as much, he will still be less useful to the group.


The Masks Fiasco of 2020 was indeed a noteworthy example of experts-gone-wild, but my favorite is still the Monty Hall problem [0], because unlike epidemiology where proof stems from evidence-based experimentation, the Monty Hall fiasco saw a wide range of experts in mathematics/statistics—a field where truth does not require experimentation but only logic-based proof—loudly and publicly denigrate a person with no expertise in their field, only to be wrong.
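The nice thing about the Monty Hall problem is that you don't have to trust anyone's credentials at all; a quick simulation settles it. A minimal sketch (assuming the standard rules: the host always opens a non-winning, non-chosen door):

```python
import random

def monty_hall(switch: bool, trials: int = 100_000) -> float:
    """Simulate the Monty Hall game; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first pick
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=True))   # ~0.667
print(monty_hall(switch=False))  # ~0.333
```

Switching wins about 2/3 of the time, exactly as Vos Savant said, and the experts who mocked her could have checked this in five minutes.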

But I think an even more interesting question, which this post does not cover, is the role experts play in our culture. Neil Postman discusses experts at length in Technopoly: The Surrender of Culture to Technology (1992), wherein he characterizes them as basically the shaman-priests who control the Overton window [1] in a world where morality and social acceptability no longer stem from things like religion:

> ...expertise is a second important technical means by which Technopoly strives furiously to control information. There have, of course, always been experts, even in tool-using cultures. The pyramids, Roman roads, the Strasbourg Cathedral, could hardly have been built without experts. But the expert in Technopoly has two characteristics that distinguish him or her from experts of the past. First, Technopoly’s experts tend to be ignorant about any matter not directly related to their specialized area. The average psychotherapist, for example, barely has even superficial knowledge of literature, philosophy, social history, art, religion, and biology, and is not expected to have such knowledge. Second, like bureaucracy itself (with which an expert may or may not be connected), Technopoly’s experts claim dominion not only over technical matters but also over social, psychological, and moral affairs. In the United States, we have experts in how to raise children, how to educate them, how to be lovable, how to make love, how to influence people, how to make friends. There is no aspect of human relations that has not been technicalized and therefore relegated to the control of experts.

...

> The role of the expert is to concentrate on one field of knowledge, sift through all that is available, eliminate that which has no bearing on a problem, and use what is left to assist in solving a problem. This process works fairly well in situations where only a technical solution is required and there is no conflict with human purposes—for example, in space rocketry or the construction of a sewer system. It works less well in situations where technical requirements may conflict with human purposes, as in medicine or architecture. And it is disastrous when applied to situations that cannot be solved by technical means and where efficiency is usually irrelevant, such as in education, law, family life, and problems of personal maladjustment. I assume I do not need to convince the reader that there are no experts—there can be no experts—in child-rearing and lovemaking and friend-making. All of this is a figment of the Technopolist’s imagination, made plausible by the use of technical machinery, without which the expert would be totally disarmed and exposed as an intruder and an ignoramus.

> Technical machinery is essential to both the bureaucrat and the expert, and may be regarded as a third mechanism of information control. I do not have in mind such “hard” technologies as the computer—which must, in any case, be treated separately, since it embodies all that Technopoly stands for. I have in mind “softer” technologies such as IQ tests, SATs, standardized forms, taxonomies, and opinion polls. Some of these I discuss in detail in chapter eight, “Invisible Technologies,” but I mention them here because their role in reducing the types and quantity of information admitted to a system often goes unnoticed, and therefore their role in redefining traditional concepts also goes unnoticed. There is, for example, no test that can measure a person’s intelligence. Intelligence is a general term used to denote one’s capacity to solve real-life problems in a variety of novel contexts. It is acknowledged by everyone except experts that each person varies greatly in such capacities, from consistently effective to consistently ineffective, depending on the kinds of problems requiring solution. If, however, we are made to believe that a test can reveal precisely the quantity of intelligence a person has, then, for all institutional purposes, a score on a test becomes his or her intelligence. The test transforms an abstract and multifaceted meaning into a technical and exact term that leaves out everything of importance. One might even say that an intelligence test is a tale told by an expert, signifying nothing. Nonetheless, the expert relies on our believing in the reality of technical machinery, which means we will reify the answers generated by the machinery. We come to believe that our score is our intelligence, or our capacity for creativity or love or pain. We come to believe that the results of opinion polls are what people believe, as if our beliefs can be encapsulated in such sentences as “I approve” and “I disapprove."

[0] https://en.wikipedia.org/wiki/Monty_Hall_problem#Vos_Savant_...

[1] https://en.wikipedia.org/wiki/Overton_window


I'm not grasping why, if one professional chess player is a fraud, all the others are.

Would the assumption be that the chess fraud was receiving move instructions from someone else? Again, I'm not sure how that would philosophically impugn all the other chess masters.


If the top chess master is a fraud everyone else who respected him and his expertise is also either a fraud or was deluded by a fraud. If you can’t tell who’s a fraud in your field when they have the kind of scrutiny being no. 1 invites there’s no there there.


The example of magnetohydrodynamicists vs astrophysicists doesn't seem entirely idle.


This reminds me of my layperson puzzlement:

Can we apply and verify the results from CERN experiments beyond experimental settings? If we can't, how can we know the results from CERN experiments aren't just artifacts of the experimental apparatuses?


That's a good question. You're right that a random grad student in a lab can't replicate, e.g., the discovery of the Higgs boson!

However, at least for the Higgs (and I believe this is standard for most such large experiments), there were actually two different groups using two different apparatuses, both at CERN.

There was the CMS[0] detector and ATLAS[1] detector, used and analyzed by two different groups of physicists, resulting in two papers[2,3], published on the same day.

[0] https://en.wikipedia.org/wiki/Compact_Muon_Solenoid [1] https://en.wikipedia.org/wiki/ATLAS_experiment [2] https://arxiv.org/abs/1207.7235 [3] https://arxiv.org/abs/1207.7214


Thanks!

Pity that I guess I won't be able to understand the papers linked in the foreseeable future.


Well, I’m an expert and as far as I know I’m real, so my answer is yes.


"Are experts real?"

Yes. They are also sometimes wrong.

Also sometimes a person fakes expertise.

Also sometimes an outsider to a field makes a useful discovery or insight the experts missed.

None of the above preclude the existence of actual experts.


Experts are real, unfortunately knowledge in many fields is built with very poor standards of quality. The experts in those fields are vastly knowledgeable in falsehoods.

While the general public questions if we should be funding the next big project in physics, the real question we should be asking is if we should be funding almost anything else as long as they are not willing to apply the same levels of rigor.


That's right; unfortunately most of science has been debunked, i.e., the replication crisis. Sadly, the populace doesn't see it that way.


Whoa - [citation needed] on "Most of science has been debunked"

We've learned to be much more skeptical, and the previous standards were scandalous, but there's a difference between "too much of" and "most of"


Here's a start, almost every single field is implicated: https://en.wikipedia.org/wiki/Replication_crisis

Economics, social sciences, even medicine.


Social psychology has a replication crisis, because they had a culture of never questioning a published result. You can see that the vast majority of that article is devoted to problems within psychology. (Medicine also seems to have problems.)

In economics 1/3 of 18 studies failed to replicate? A study that small, what are the odds that study itself would replicate? Note that this claim is restricted to experimental studies, which are usually testing human decision-making on economic questions. This is the field of economics closest to social psychology. It seems plausible to me that the model of "get a bunch of people in a room and ask them some questions" is not a reliable methodology.

Economics as a whole may have a problem with replication, but we can't tell from the information on that page. There are whole fields that aren't even mentioned -- no physics, no chemistry.


And don't forget machine learning!

If your paper in ML doesn't provide code, I assume that it's not reproducible and the authors made a conscious decision to not release the code...


That is not most of science. There are more biochemists than all those fields combined.

The fields affected have three properties: they deal with complex entities, they are interested in phenomena that are generally investigated using statistical methods, and they deal with noisy data. For those fields, we need new methodologies, but other sciences do not have a comparable problem.


Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124

https://doi.org/10.1371/journal.pmed.0020124


On the other hand, here's a hilarious response how Ioannidis is wrong in his claim from one of the most well-known replication crisis blogs: https://replicationindex.com/2020/12/24/ioannidis-is-wrong-m...


Why do you suppose they chose to publish this hit piece as a blog post instead of going through the peer review process with a real journal article? Or at least a formal letter to PLoS Medicine?


I'm not sure. Maybe they did both and the real article is somethere in peer review pipeline. (Do you imply that no posts on replicationindex should be trusted since they're not properly peer reviewed?)

One possible reason is that blog posts are more widely shared on social media, and I agree that this is kind of a hit piece, mostly as a reaction to Ioannidis' contrarian stance on COVID-19, so publicity was probably an objective here.


We accept that any complex software, even well-designed and well-tested, can malfunction -- often spectacularly. What, then, can we expect from humans? Consider that virtually no effort went into the design; and the test framework is horrific, relying primarily on trial and error.

The natural outcome is that any human "expert" is no more reliable than your average startup codebase. In fact, it's a wonder that human societies manage to organize themselves into anything resembling civilization.


From an economic perspective there's a relatively simple solution: requiring experts to wager a certain amount of their own money on a measurable outcome of their prediction when making public policy recommendations. If somebody's not confident enough in their own prediction to put their own money on it, then that prediction shouldn't be used as the basis for policies that can affect millions of peoples' lives.


Science is not about being right every time. Stating a hypothesis publicly as you're experimenting to determine it's validity shouldn't require a wager.


To be fair, betting isn't about being right every time either. To really make bank on a site like DraftKings you need to be good at the meta strategy, not just good at predicting sports outcomes.

Obviously it's not a good idea to require a bet to make a public statement, but I also wouldn't mind seeing scientists be held more accountable for important decisions. How often are study sections evaluated after the fact, for example?


If we're talking public policy decisions, I don't think scientists should be punished when acting on the best available, but incomplete data, especially when two equally reasonable experts within the field can disagree on its interpretation.


I remember this concept being mentioned elsewhere as "friendly" and "hostile" learning environments


Epstein's excellent book Range uses the terms, though I think they don't originate there.


To be fair, the CalPERS board that hired Meng is absolutely corrupt _and_ inept on top of it. At least Meng is now gone. If you want to read about some even more absurd missteps that he made while he was CIO, see Naked Capitalism. The archives have the full stories, but the link below has the highlights: self-dealing, lying, misrepresenting his credentials as an investor, and oh yeah, cancelling a massive hedge against market downturns right before coronavirus hit because he didn't understand how hedges work.

https://www.nakedcapitalism.com/2020/08/calpers-chief-invest...

Honestly it was the least convincing part of the blogpost's overall argument -- in any large-enough pool of people of course there will be a few charlatans and frauds.


Teller attributed to Bohr the saying: "An expert is a person who has found out by his own painful experience all the mistakes that one can make in a very narrow field."

I prefer something closer to "An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them". 'Cause even experts make mistakes in their own field.

This piece seems to think an expert cannot make a mistake in a given field of expertise, which is "utter nonsense, of course."

I don't like this example: "If you look at the papers citing Viechtbauer et al., you will find dozens of them simply using N=59, regardless of the problem they're studying"
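For context: as I understand the Viechtbauer et al. paper, N=59 is just the worked example of their formula n = ⌈ln(1−γ)/ln(1−π)⌉, the smallest pilot-study size that detects at least one problem with confidence γ when each subject exhibits the problem at rate π. The number depends entirely on those two inputs, which is exactly why copying 59 verbatim only makes sense if your γ and π happen to match theirs:

```python
import math

def pilot_n(confidence: float, prevalence: float) -> int:
    """Smallest n that observes at least one problem with the given
    confidence, assuming each subject shows the problem at `prevalence`."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - prevalence))

print(pilot_n(0.95, 0.05))  # 59 -- the paper's worked example
print(pilot_n(0.95, 0.10))  # 29 -- a different prevalence changes n
```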

Paging through the linked-to Google Scholar search finds 90 results. Of the first 12 with readable text, only two seem correctly described that way (and max 3), suggesting under two dozen examples. A spot check of the remainder suggests that pattern holds. I'll show my work:

#1 says "A sample size of 30 patients per group was chosen as recommended for pilot studies to achieve an appropriate level of statistical power [30]." They selected 60, and "1 patient was lost to follow-up" giving 59. So this wasn't N=59 but N=60. - https://medinform.jmir.org/2019/4/e14044/?utm_source=TrendMD... . 30 was chosen because "It corresponds to the detection of a potential difference of 21% between groups for a power of 80% and an alpha significance level of 5%." which is not the default example from Viechtbauer.

#2 does not use N=59. The number 59 occurs because: "The study included 16 patients with diabetes (59 ± 8.8 years; 8 men and 8 women)" - https://www.sciencedirect.com/science/article/pii/S095825921...

#3 uses N=59, though I don't know if it was used incorrectly because I can only read the abstract and citation list - https://journals.sagepub.com/doi/pdf/10.1177/0306624X1985848...

#4 specifically does NOT use Viechtbauer et al., writing: "Viechtbauer et al., (2020) recommends sample size between 10-40 as adequate for feasibility study. Linacre, (2015) suggests a sample of thirty (30) respondents as appropriate for general purposes. This study however uses 60 respondents because adequate sample and item produce a good item, person separation and reliability in Rasch measurement (Fisher, Elbaum, & Coulter, 2010)." - https://journals.sagepub.com/doi/pdf/10.1177/0306624X1985848...

#5 uses N=43 ("With an alpha value of 0.05 and with an 80% power we need a sample size of at least 43 patients to detect a difference in validity and assess reliability") - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6886622/ . This is not the default example from Viechtbauer.

#6 uses N=59, using calculations directly from Viechtbauer et al. It appears to be used correctly.

#7 uses N=53, commenting "There is no gold standard for sample size calculation in pilot studies. Sample sizes as small as 10 (Hertzog, 2008) and as large as 59 (Viechtbauer et al., 2015) have been recommended." - https://journals.lww.com/nursingresearchonline/FullText/2018...

#8 uses N=30 for each region, writing "Tamaño de la muestra: en cada región, se llevó a cabo un estudio piloto con aproximadamente 30 pacientes" - http://scielo.isciii.es/scielo.php?script=sci_arttext&pid=S1... . The 59 occurs in, for example, "La reducción también fue estadísticamente significativa (59%) al tener en cuenta el coste del tiempo de enfermería".

#9 points to the N=59 recommendation (correctly, for their experiment) but actually uses N=71 - https://www.mdpi.com/1422-0067/20/8/2024/pdf

#10 appears to have the discussed N=59 problem as it doesn't mention the 0.05 effect - https://mhealth.jmir.org/2016/3/e109/ .

#11 says "Pilot studies should include at least 12 patients [30,31], and cluster-randomized studies need more patients to correct for the clustering [32]. To have sufficient power for this pilot study, we intended to include 15 patients per nurse." where [31] is Viechtbauer. The value 59 is because "data of 149 (98%) patients were available of whom 66 (44%) were in the intervention group; ‡ = data of 135 (89%) were available of whom 59 (44%) were in the intervention group". - https://www.sciencedirect.com/science/article/pii/S026156141...

#12 says "In line with recommended sample sizes of pilot feasibility trials, 83–85 40 patients is deemed sufficient to explore feasibility, acceptability and potentially efficacy of the intervention, assuming retention rates of 80%, the true population consent rate will be with a margin of error of 13% (95% binomial exact CI)". The only "59" comes from citations. - https://www.sciencedirect.com/science/article/pii/S026156141...

I'm not sure how we're to have a take-home lesson when the false negatives so overwhelm the claimed evidence.

I'm also hung up on: "Capitalism and evolution work so well because of their merciless feedback mechanisms." because of obvious questions like "does capitalism work well for slaves?"


People seriously misunderstand expertise.

There's the perennial news stories like "Harvard MBA's don't know what causes Earth's seasons!?! LOL!". And the "He's a Scientist, so he must know...". And the "my local TV news meteorologist proved climate change isn't real". And in TFA, "But sometimes [experts] say something so obviously incorrect, [...] that the only possible explanation is that they understand very little about their field. [disillusioned gasp!]".

So, you're a computer engineer? Wow, you must be sooooooo smart. An Expert! And did you read Huckleberry Finn in school like me? Yes, thirty years ago? Great, what do you think of the raft trip as an allegory for mumble mumble? What, you've no idea?!? - LOL. But I'm glad you're an Engineer - tell me what kind of solar panels should I buy, and about driving trains. And, Ha ha, you didn't even notice the computer cable was loose - you obviously understand very little about computers.

If the last time someone touched something was in middle school, don't be surprised to find a middle schooler's understanding. If you ask a protein chemist a quantum chemistry question, don't be surprised to get a decades-old graduate student's answer. Don't be surprised if a respected fruit fly biologist believes earthy-crunchy nuttery about the human health benefits of eating nuts. Don't be surprised by scientists gossiping at conferences: "Yeah, don't believe work from [first-tier PI] on topic X - he sees what he wants to see". Don't be surprised when a first-tier, stone-cold empirical physicist says "I don't need to test it - I trust my gut to tell me how well my teaching is working".

Expertise can become ramshackle startlingly fast as you move away from someone's own research focus. Or sometimes from an advanced teaching focus. A focus with fear of being embarrassed by getting things wrong in front of double-checking picky peers. Getting things right is hard, and takes effort - which has to be incentivized, and monitored.

And it's not just "what is expertise" which is taught badly. If I told you "Yay, I found a good Professor of Computer Science to write a Computer textbook, including language design, microservices, chip tape-out, and UX - I'm sure he will do a correct and insightful job of it", what would you think? So, what if you heard "Yay, I found a good Professor of Chemistry to write a Chemistry textbook, including bonding, nuclei, materials, and nanostructures - I'm sure he will do a correct and insightful job of it"? Or how about "Welcome to <big textbook publisher>, new graduates, new science textbook writers. Don't worry that you only have a liberal arts degree and no background in science at all, because we have a Scientist on call for questions!". Sigh. People seriously misunderstand expertise.


Great article.

> Do you have it in you to ignore 800 epidemiologists, and is it actually a good idea?

Yes, I do, because I was tracking the wave of non-stop bullshit coming from scientists closely throughout all of 2020, and got completely fed up. But I'm not the norm (yet), and no, it's not actually a good solution.

We're in the beginning of the story of the boy who cried wolf. Eventually the whole town ended up like me: fully willing to ignore and oppose the mainstream information channels. When a real wolf came, the boy told the truth once out of 1,000 times, and the townspeople all ignored him and got their sheep eaten up. We need to find a way to crowd-source our trust mechanism if "trust the experts" starts to fail.

My suggestion is that any "expert" who volunteers an opinion that later gets falsified should be canceled permanently from the public sphere, with no forgiveness. They can go on publishing in scientific journals and selling books, but their opinion should not be accepted by the general public anymore.

The other solution is to keep ignoring the boy and start watching your own sheep.


So what main issues are you talking about that were "bullshit" advice from scientists in general, exactly? You make a lot of super vague arguments with no specifics, and while it's nice to agree with children's fables like the boy who cried wolf, unless we agree on the facts of what happened, it's kind of a rhetorical argument with no real bearing on reality.

Also, I've got to say, "cancelling" anyone who's ever wrong from being an "expert" in the public sphere sounds like a really poor idea. Everyone is wrong sometimes and that's a crazy standard which would have tons of its own perverse incentives, probably even worse than the ones scientific journals and the like have now. Do you want scientists on the record terrified of saying anything besides the most vague and unfalsifiable statements? It's an untenable situation if you bring it to its logical conclusion. I do agree with you that there ought to be more consequences for scientists willingly spreading bad science though, beyond what there is now--but the solution won't be simple.

Given the vagueness of your post I'm not sure why you have the perspective that scientists produced a "non-stop wave of bullshit". From my perspective, I saw a bunch of demagogues and populists cynically manipulating a major demographic for personal gain while sidelining and ignoring the broader scientific advice and recommendations in the country the entire time, while over 350000 people died.


> My suggestion is that any "expert" who volunteers an opinion that later gets falsified should be canceled permanently from the public sphere, with no forgiveness.

That will lead to one of two things: (1) information paralysis, where no-one authoritative is willing to speak in public for fear of making a mistake (and so the public sphere is ceded to the ignorant); or (2) constant churn, where experts are constantly removed from the public sphere (because everyone makes mistakes), and so we only hear from the people who haven't yet had enough experience to make a mistake.


I have a more positive opinion of scientists than you seem to, and I'd like to explain why.

A big part of the problem here is selective media reporting of science. All fields have good practitioners and bad practitioners. ("bad" here can mean overconfident or bad at stats or outright fraudulent) As the parent article points out, in fields where objective feedback is less easily accessible, it becomes easier and easier for bad practitioners to survive. Many media sources will report only the opinions of scientists that support the positions they agree with. And the fields with the least objective feedback are often the very same fields that the media has strong opinions about, meaning that they can easily find somebody supporting their position. Even well-intentioned media sources can have difficulty with mistranslating the results or misinterpreting the confidence level of some piece of research.

I think the reason I have a more positive opinion is because I more often tend to look at primary sources: papers, or at least blog posts by researchers. That kind of stuff is often linked on HN, which is a big part of the value of this place. Looking at primary sources makes it harder to tell what's important, and what's just minor details, but the advantage is it allows you to see more of what's actually going on, instead of just getting the media's view of "the scientific consensus". Of course you still have to watch out for the replication crisis and all that, but it's easier to tell if the scientists did their stats right when reading their paper than to try and guess based on a journalist's write up of the results.

I guess the problem I have with your "permanent cancellation" proposal is that it doesn't allow for uncertainty. Have a group of people try to predict the outcome of 7 coin flips in a row, and less than 1 percent will remain, along with maybe Persi Diaconis. You need to have some way for people to say "there's a 50% chance of heads" without getting fired. I agree that we should still keep track of people's predictions, and stop listening to them when they've been confident and wrong many times. That kind of "prediction checkup" is important, and I wish that it was more mainstream to publish checkups of that kind. Expecting people to keep track in their heads means that most of them won't bother, and will end up listening to the same doofuses over and over.
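The coin-flip arithmetic checks out (0.5^7 is about 0.8%), and the "prediction checkup" idea can be made concrete with a proper scoring rule like the Brier score, which rewards calibrated probabilities rather than lucky confident guesses. A sketch with hypothetical forecasts:

```python
# Brier score: mean squared error between forecast probabilities and
# binary outcomes. Lower is better; always saying "50%" scores 0.25.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes      = [1, 0, 1, 1, 0]               # hypothetical events
calibrated    = [0.8, 0.2, 0.7, 0.9, 0.3]     # hedged but informative
overconfident = [1.0, 0.0, 1.0, 0.0, 1.0]     # bold calls, sometimes wrong

print(0.5 ** 7)                        # ~0.0078: under 1% survive 7 flips
print(brier(calibrated, outcomes))     # 0.054
print(brier(overconfident, outcomes))  # 0.4: worse, despite the confidence
```

Scoring experts this way punishes confident wrongness without firing everyone who honestly says "it could go either way".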


Many argue that the experts ultimately got it right (and are thus redeemed). I strongly disagree. Even when the public health types corrected course, their behavior left a bad taste in my mouth. There was no acknowledgement of having been wrong, no introspection on how their errors came to be, no discussion how they could be avoided in the future. Just aggressive enforcement of the new party line couched in appeals to authority. The whole enterprise reeks of blind dogmatism.


> Many argue that the experts ultimately got it right (and are thus redeemed).

Perhaps their opinion on that one matter they corrected was redeemed, but their general lack of understanding about the limits of their own knowledge was exposed, which is more damning for someone called an "expert."


And in time-sensitive situations getting something "ultimately right" is akin to getting it wrong.


Spot on, a broken clock looks correct twice a day. I have zero confidence that they're actually correct this time.


Which experts got it right?

The Swedish epidemiological team? The state of Florida epidemiological team?

Both regions had better outcomes per capita, with fewer Covid restrictions, than numerous states that followed the mainstream 'experts' advice.

How do you measure 'got it right'?


Cut-and-pasting the same comment[1][2] multiple times is rude, IMHO.

I believe there is a lot of noise in the numbers and a large number of causal factors. Florida and Sweden could easily be random outliers, or the actual reasons may not be the cause and effect you are using as an "explanation".

[1] https://news.ycombinator.com/item?id=25740965

[2] https://news.ycombinator.com/item?id=25740976


There's a combined total of around 30 million people in those places. That's a decent enough population size to make me wonder why places with far more draconian restrictions have worse Covid outcomes. I'm not convinced noise is a sufficient explanation for 30 million people.


I'm baffled by your post, because I have honestly had the opposite experience. I've been tracking expert advice on Covid, and practically all experts have been right about almost everything related to the disease from the start of February until now. They even predicted the reactions of the population correctly over time, and the second wave in autumn in my region. There were minor disagreements about measures, but overall mostly agreement, and practically every expert has been and is still advising more stringent measures than the governments end up implementing. (Which is not surprising, given the different responsibilities at play.)

I felt not only well-informed, I actually felt and still feel a bit over-informed about the topic. My country of residence handles the crisis relatively well, so it's kind of silly and depressing to me to continue to collect data and look at daily Covid figures.

May I ask where you live? Maybe the US? Did you get your information mostly from print media, online media, or TV?


But which science and experts?

The Swedish epidemiological team? The state of Florida epidemiological team?

Both regions had better outcomes per capita, with fewer Covid restrictions, than numerous states that followed the mainstream 'experts' advice.

The experts were clearly lying about masks at the beginning of the pandemic as well. That doesn't give you pause in trusting them?

Are you even looking at dissenting experts opinions? Following a single group of experts is not enough to get a reliable measure.

How do you even define correctness in this case?


> May I ask where you live? Maybe the US?

How'd you guess? :)


That explains the different experiences, I live in continental Europe and mostly followed local expert interviews and journal articles. It seems to me that the topic was politicized early in the US. I've followed Fauci's meandering diplomacy from time to time, as well as disagreements between federal and local responses. Maybe some trade-offs were made.

There is one complaint I had about experts in Europe, too, and that is that they resorted to a "king's lie" about masks at the beginning of the pandemic. Certainly none of them said you shouldn't wear a mask, but they downplayed their importance while there weren't yet enough of them for medical staff. IMHO, they shouldn't have done that. Instead, governments should have regulated mask trade and distribution from the start, like some Asian countries did.


> I've followed Fauci's meandering diplomacy from time to time, as well as disagreements between federal and local responses.

It speaks volumes that you identified what country I'm from and the person I was really talking about just from a vague rant about "scientists."

My rant may have been aimed at one or two people who appear to be compulsive liars at this point, but the author of the post is complaining about a problem with science in general. And it's not something you can attribute to anti-intellectualism or the perception of science; it really is a "dereliction of duty" on the part of scientists, and the madness of 2020 exposed these problems to the general public. More on that problem here: https://www.youtube.com/watch?v=7leWXMr3ayk

As an aside, I would ask those people (not you) who blamed me for feeling this way if they really think browbeating people who speak out will fix the issue. Why do we all have to keep blaming each other? Why not work together to hold our experts to a higher standard?


There wasn't a problem of the information not being there; there was a problem of 30-40% of the country believing angry demagogues over experts, and said demagogues strong-arming the scientific community with political and social power. Even the most basic precautions like wearing a mask were politicized while the scientists practically begged people to wear them. They left the WHO and refused to enter the vaccine pool. How do you fight that? Blame science? It's ridiculous, and I seriously question the good faith of anyone here claiming it's primarily science's fault.

I was here the entire crisis (and I'm not a US citizen, btw) and had easy access to the same advice Canadians, Europeans, and whoever else in the West got as well. The information was here, but the political leaders often downplayed it or simply refused to follow it. What did we get? The numbers don't lie; take a look at the results of the current political climate and the death count. Being overly skeptical of scientists wasn't a "good way to go." Yes, there are problems with science, but this is, if we're being cute, "throwing the baby out with the bathwater"--or, if we're being less polite, so incredibly stupid that it's largely contributing to the deaths of hundreds of thousands of people.


I suppose we are at a huge impasse as a society then. Because someone like me holds the opinion of non-experts as having zero value. I will not respect and do not want any policy being set by people with your attitude. I will continue to follow the science, and if I feel that people like you who ignore it are threatening my life, then I will do everything I can to stay far away from you and keep you far away from me.

I'd be totally for splitting society into those who respect science and experts, and those who don't. I have no desire to live in your world.


> I'd be totally for splitting society into those who respect science and experts, and those who don't. I have no desire to live in your world.

What of those of us who respect science, but have less respect for experts? I strongly believe in science, and believe it to be humanity's "candle in the dark". I have little respect for individual experts who insist that their advice must be followed by all those they deem to be lesser than them, whether science is their justification or not.

Science as a process is the best tool we have. Scientists are human beings, and are just as flawed as everyone else. The contexts they operate in have also shown themselves to be highly susceptible to influence by political trends. There was no end to the "science" that would show you the inferiority of minority races 100 years ago. You are not doing science a favor by steeping it in dogma and being blindly deferential; you're turning it into a religion.


If you can demonstrate your position via the scientific process, then great, I'm all in. If you can't, then I'm going to side with those who try, even if their technique is less than perfect. Today our scientific experts are some of the very few who even try to follow the scientific method. I acknowledge that there is a small number of people outside the profession who also try to apply science, but that number is so small it's almost not worth mentioning.

The reality is that few people even have the means to actually perform experiments--certainly outside of computer science. It's unfortunate that the resources we allocate as a society toward scientific pursuit are such a tiny fraction of the resources we spend overall. But I suppose that is more a reflection of the value of science to the general population than of anything else.


Make us go to Galt's Gulch?

I think bubbling society is the future, and we need to aim for it--but with electronic barriers, not physical ones. I.e., your phone tells you whether this is a pro- or anti-expert cafe, where to sit on public transport, even what path to walk.

At an extreme: ideally, this time last year when experts told people not to wear masks, those who chose not to believe them would, within our internal world, have become like an Asian country and kept the virus at bay while the society around us burned. Bubbles don't interact at all. But that's fantasy with the current structure.

That said, although I don't want to associate with people who believe in experts, I also know the alternative might not be better; I would lose a lot of friends. Bubbling will make people choose between ideals and reality.


"Experts" didn't really tell people not to wear masks. A small number of political bodies did, but the scientific consensus was pretty clear right from the start. In fact, even those who said not to wear masks argued it from the point of view that those masks should be allocated to medical workers because of a shortage, not that they didn't work.


That's where we disagree, that's not the reality I lived and that's a fundamental part of my belief system.

I do agree with the physical separation you talked about, but it's simply not possible. Seasteading costs are so high that you/I wouldn't be able to pick our friends.

I do think the way forward is intermixed lives where we could share roads for instance but never connect. I think that's possible in the future. But as yet we can't even achieve it electronically, case in point :)


> I'd be totally for splitting society into those who respect science and experts, and those who don't

Science not as a search for knowledge, but as a search for daddy figures to respect and obey. Nothing is as harmful to progress as this blind devotion to expertise.


Why is the answer to make me the enemy just because I spoke my mind?


Society is a social contract. I participate in it because I think we are stronger, safer, and better off together. But if your idea of a social contract is to disregard experts and the accumulated knowledge that we've worked centuries to obtain, then I don't think we are better together. In that case I think you are a net negative on the social wellbeing, and on my wellbeing, and I do not want to be in a social contract with you.

Working together and benefiting from each other's hard work is not an innate right. It's a right bestowed by joining into a particular social contract, be that a national one or a local one. You are saying to me that you will not pull your own weight--in fact, worse, that you will work against the common good. Thus I do not want to work together with you.


Because your considered opinion leads to poor results.


> My suggestion is that any "expert" who volunteers an opinion that later gets falsified should be canceled permanently from the public sphere, with no forgiveness.

Why only experts? Would other people be allowed to publicly say wrong things?



