> Why aren't more labs outside China making LK-99 and publishing videos?
Good rigorous science takes time to produce. It can take anywhere from several months to a year or more, and the career implications of rushing something out that is later found lacking are not great.
> the career implications of rushing something out that is later found lacking are not great.
On a tangent, this idea of reputation keeps coming up in this whole discussion, and it burdens me in a way I don't fully understand. The way people have talked, if LK-99 doesn't work out, it is almost as if those who published it did something _morally_ wrong. Well, morally wrong is not quite right, but the way people talk about it tanking reputations feels like such a strong statement. Is there some way we can focus on the science and not get bogged down in the very human reputational part of this whole thing? It's almost as if a good chunk of the scientific community cares less about the benefits the science brings than about the reputational benefits.
A scientist's career depends on their reputation. For them to have the best opportunities, they need everyone to have the highest respect for the quality of their work. If they damage their reputation, it could ruin everything they've spent decades working toward.
Of course they care about science itself, but there's a limit to what risks they'll be willing to take when it affects them personally.
For people with a relatively low reputation (or no reputation, i.e. unknown), taking a risk is not a bad move. They have less opportunity, and there's a chance the risk might pay off and boost their reputation.
For people whose reputation is already good, the risk is less worth it. They don't stand to gain as much, and they could lose a lot. So they're less likely to do it.
It's a direct side effect of reputation and funding being closely correlated: if your reputation is that you put out stuff that doesn't work, you won't get funded. This is dumb, but that's how the world works. That's why you almost always see the 'more research is needed' line in papers; when seeking funding, it helps to suggest that one paper will lead to another. Unfortunately, negative results aren't published nearly as often, because they won't get cited as much in follow-on papers. It's all a result of metrics-based meta-analysis of papers, aka the 'impact factor' (which, no kidding, is a copyrighted term); once that got established, it became the thing that science partially optimized for.
During the 'golden age' of science, the era of the Royal Society, fields weren't specialized at all, and the publication mechanism was scientists sending each other interesting findings by post. At that time there was no meta-analysis whatsoever, and there was so much low-hanging fruit that the 'gentleman scientist' could make big breakthroughs in a home laboratory. But as that low-hanging fruit dwindled, the educational path required before one could do meaningful science grew longer and longer; then specialization set in and the costs of doing science went up. That's how we arrived at grants used to fund science.
> That's how we arrived at grants used to fund science.
A lot of these gentleman scientists were independently wealthy aristocrats who didn't need hand-outs. The fact that we no longer have that sort of leisure class to any meaningful extent is arguably a much bigger reason we need grant-funded science these days.
It could be argued there is a bit of a replication of that pattern in the space race between Musk and Bezos, but they're missing the sort of broad education the aristocrats of yore would have had[1]. They employ a lot of people to do the actual dirty work, but that's not really a big difference from back then either.
> It's a direct side effect of reputation and funding being closely correlated
This sheds some light on it for me. I guess what partly surprises me is that people seem to care about reputation as more than just a means of improving the signal-to-noise ratio in papers, or an estimate of what will give you the biggest bang for your buck.
The other issue I see come up is the idea that if there is no signal-to-noise filter, a scientist might "waste their time," either reading a paper or trying to replicate it. But to me, that sounds a little like trying to avoid actually doing science. Peer-reviewed papers don't imply excellent quality either. You should evaluate papers on their merits. It is your job, as a scientist, to identify the most productive approaches based on the merits of the science being done, not on reputation.
Working in science is different from working in other fields in that you work with things that are not well known, where a lot is unclear, and your job is to move information out of this murky regime into the light.
This means it's really easy to claim something that will be really hard for others to verify.
And wrong claims are incredibly common. It's easy to delude yourself through all sorts of biases or good old sloppy work.
That's why, when scientists talk to each other, they need to know that the other person is a serious scientist and won't pollute their mind with nonsense.
If you develop a reputation for making baseless claims, people will stop including your claims in their own thoughts.
This is a good point. I think there is a difficult balance to strike between open communication and adding confusing noise to the scientific literature. Partly for historical reasons, there is an expectation that published science is correct to the best knowledge of the authors. Since writing, publishing, and reading papers takes a lot of time and effort, there are advantages to this precedent. I work in physics, and I can tell you that if we published all of our half-baked and often wrong ideas, we would waste a lot of people's time, at worst sending them down blind alleys that we would soon rule out ourselves. I suppose tying reputation damage to publishing incorrect or misleading results is part of the incentive structure that keeps publication quality high. At the extreme, there was one recent LK-99 paper that had an obvious glitch in its data, and instead of taking a bit more time to debug it, the authors just posted the paper and speculated about what was going on. If that's how much you're rushing, how do I know I can trust your data?
But there are costs to this. There are big gaps between what people discuss with colleagues and what gets published, and there is no forum for publishing partial or negative results, except maybe conferences. Ideally, published papers would stay at a very high bar while other forums exist to publicly share work in progress. In a way, Twitter is becoming this.
> Good rigorous science takes time to produce. It can take anywhere from several months to a year or more, and the career implications of rushing something out that is later found lacking are not great.
By my count, 18 of the top 20 universities for chemistry research are in China. The first US university in chemistry is MIT at 23.
One of the attempts is by USTC, the second best university in the world for chemistry research according to the Nature link.
China's lead in chemistry research is also translating directly to real world applications. For example, CATL and BYD combined own more than 50% of the car battery market. Six of the top 10 car battery makers are Chinese companies. [0]
It's not surprising that most of the first replication attempts are from China.
I think for a lot of people this whole saga is probably the first time they realize that a ton of original work is done in Asia, rather than it just being our manufacturing hub. They have to adjust their mental model to account for an Asia that, in important ways, is outstripping the West in resources and combined brain power. The amount of scientific output in Asia is astounding, and the number of active scientists dwarfs the numbers in the West. If they were to stop publishing in English, it would be quite amusing.
Lmao, all that implied was that rushed papers don't have time for in-depth analysis, only for reproducing the material without further insight. Which is absolutely true; we're not superhuman, and it takes time to replicate the material and then time to disseminate why/how it works.
You inferred logic that was never present in my reply. It's a correlation/causation error. It is fully possible for things to happen in China that are not caused by the specific character and nature of Chinese people; pointing out that something has happened in China is not an implication that it was caused by some special nature of Chinese people or Chinese society.
If you go out of your way to look for uncharitable ways of interpreting what others are saying, you will find them.
Instead of going on the offensive and taking such an uncharitable interpretation as a given, if you truly can find no charitable interpretation[1], maybe ask for a clarification rather than jumping to conclusions about unspoken implications.
While it's true that good science often takes time, I believe it's not necessarily the whole picture. In fast-paced and rapidly evolving fields like this one, swift and open sharing of progress can be incredibly valuable. This is evident from the recent developments in AI and large language models (LLMs), where real-time collaboration and data sharing have led to exponential advancements.