This is obviously a quickly produced paper to get the finding out, but I find so much missing, and so many things about their process poorly worded or posed that would have taken zero time to expound upon, that it's infuriating.
"All the reactions are carried out under 10^-2 Pa"
OK, I know they mean a vacuum of 10^-2 Pa. But why not say that? "10^-2 Pa" isn't enough on its own. Was this a full vacuum oven? Done in sealed quartz vials? Was there a purge, like argon, or just air?
If you look at the oven temperature profiles, you can see the ramp up time (0-2hr, 0-2hr, and 0-4hrs respectively), and the hold time, but the ramp down time isn't specified! There is no cooling rate, it just shows... a line drop off, with no end time. No label. This can be very critical. Were these just pulled straight out and air quenched? And were they kept under vacuum until at room temp or not?
Like, adding extra experimental setup details would take no time whatsoever to include in a paper and yet these researchers just don't do it. It's either pure fucking laziness or some sorta holier-than-thou gatekeeping that comes from theoreticians, or a combination, and it is the reason that replication is so hard in science right now. I would hope that no journal would accept this shit.
I agree, it could use more of the easy details; however, I don't have much issue with 10^-2 Pa, since standard atmospheric pressure is 101,325 Pa, so 0.01 Pa (10^-2 Pa, or about 1.45x10^-6 psi) is definitely understood as a vacuum. But you're correct, why not just add "vacuum" for thoroughness.
I know mercury gauges used to have a 0 to 30 scale sometimes (not 0 to -30), and that was confusing!
There is intense pressure to publish as many papers as possible at Chinese universities. This has led to a big problem of faked or just bad research papers coming out of China, so people are generally skeptical of them.
As a non-Chinese scientist, I can attest that there is intense pressure to publish as many papers as possible pretty much everywhere, and this leads to a problem of bad research across the world.
China may be among the worst offenders in this respect, but the particular thing the OP is asking about doesn't strike me as a Chinese-specific thing at all. I see non-Chinese scientists rushing half-baked results to arXiv all the time.
> It’s so bad that China’s courts have called for the death penalty for scientific fraud.
To add some context, the death penalty is used much more widely in China than a Westerner might realize, so to them this idea is not quite as radical as it might seem. The PRC government is a bit tight-lipped about exact policies, but it's known the death penalty gets employed for things like corruption and even major economic crimes like fraud or money counterfeiting. And of course drug offenses and violent crimes.
And for speaking out against the power structure. The death penalty for bad action is only if you are on the wrong side politically. Xi’s family is among the wealthiest in the world due to corruption.
Please refrain from whataboutism. We're not discussing Biden or the US. We're discussing the corrupt Chinese elites and the authoritarian regime of the PRC.
Replication is just a nice term for it. It can just as well be a crisis of fraud with the failure of replicating fraudulent results as the consequence.
Scientific fraud and garbage results are a huge problem everywhere. It's actually big enough to nearly invalidate the entire field of psychology.
“More than everyone else together” is meaningless without considering the relevant population sizes (in this case, the relevant population is “number of researchers producing papers”).
If China has half the world's academics-writing-papers, one would _expect_ them to be responsible for half the fabricated ones.
Chinese researchers [1] publish fewer total articles in Nature than American ones [2]. It's just one journal, with an admittedly western-leaning audience, but that's what the QZ article focuses on. It does include the caveat that it's possible that translation problems played a bigger role in the Chinese retractions than they did for other countries' retractions, although it would be surprising if that were the only cause of such a widespread problem.
> The move, as Nature explained, groups clinical trial data fraud with counterfeiting so that “if the approved drug causes health problems, it can result in a 10-year prison term or the death penalty, in the case of severe or fatal consequences.”
To be fair, according to TFA the death penalty is only for clinical trials where the drug or procedure, due to faked or doctored data, causes severe or fatal consequences.
It actually seems pretty reasonable to me. You develop a drug that actually kills people but fake your data to show it saves lives. Then you make money while people start dying, essentially killing people because your career was more important than reality. I’d call that murder.
> In the US if you fake data you can become president of Stanford (for a while).
One wonders: is the unearned time as an elite university president worth being humiliated by an 18 year old on the student newspaper and being forced to resign in disgrace?
For me that's an easy no, but others may have different preferences.
It's not biased, it's a fact. I see it all the time as PC/AC/reviewer/etc.
There is a deluge of terrible papers from China that are just a mess, below any imaginable standard; ones that labs in the EU/US/Russia/Japan/etc. don't put out. Yes, everyone has to publish, and there are bad papers out there, but the volume and low quality from China is unmatched.
That is crazy talk. I don't believe it. Korea and Japan both have crazy levels of "face" in their culture (and Taiwan to somewhat less extent), and they all produce huge amounts of high quality scientific papers.
Isn't the population of China much bigger as well? Laws of percentages and whatnot can go a long way. Has anyone displayed a percentage of rushed/inaccurate studies between countries to see if the lines normalize?
I saw such a chart a few days ago, specifically for withdrawn papers -- forgive me if I can't dig it up again, but China made up something like 50% of the total number.
Population of researchers != population. China is more than 4x as large in terms of population but has only ~25% more researchers, while having around 50x the retractions.
> Isn't the population of China much bigger as well?
Yes, but official government stats have also been overstating the population a bit. I think it mostly affects the younger generation at present though.
lol I don’t know where you got this impression but during the one-child policy phase it’s widely known that people would underreport their kids to avoid fines. Why would the population be overstated?
> lol I don’t know where you got this impression but during the one-child policy phase it’s widely known that people would underreport their kids to avoid fines.
The news mostly. But also this guy[1], who I guess is using the leaked data. But even the official data shows a huge drop in the 0-4 bucket.
> Why would the population be overstated?
As I understand it[2], the local governments are reliant on two sources for income: land sales and money from the central government. Both are influenced by demographic change, so there's incentives to adjust the numbers upwards to keep revenue coming in.
If certain countries have a very big population, I'd say it's more of a quantitative issue. If China had 10 scientists they couldn't pump out as many papers, fake or good, as they could with 1 million scientists.
>> What personal benefit does this gain someone to publish it so quickly? Is it just social media attention?
> There are no prizes for being second in science.
The reply (second quote above) fits in context, but there is more to it.
1. Publishing early at the expense of quality has a way of catching up to one's reputation. (Hopefully.)
2. History has many examples of scientists who were "too early" or not "in the right place at the right time" to get recognition.
3. A result may get little attention in one field but a lot in another. One example that comes to mind is string-matching algorithms. Sometimes they seem a dime-a-dozen in CS. But the "right" ones have transformed DNA sequencing.
Your point is fair.
It feels like it also underscores the importance of "continuous teaching and learning" and global (or assigned) peer reviewing.
At moments in time where discoveries like this one happen, one could hope that beyond "open publishing" like Arxiv, comes a true "science in the open", with room for cooperation.
Looking forward to further evolving our current system.
The formula and atomic structure should not be patentable (like gene sequence), but a novel and non-obvious industrial process to fabricate such material could be patented. There may be many different such patents.
The last week reminds me of the story of Bardeen & Brattain's invention of the transistor (e.g. in The Idea Factory). It barely worked and heaven forbid you bump the table. There was even a third slighted figure who wanted credit. Part of me wants to be skeptical but OTOH if it's hard to reproduce that seems totally normal. How exciting that some other people have got it working now.
Things also tend to be invented at about the same time in different places. I wonder if there are other people who were this close to inventing it but just couldn't get it right.
Last year I read The Double Helix and The Code Breaker within a few months of each other, and what really stood out to me is that while we celebrate the first person who discovers something, in reality there were several teams all racing to the finish line, and were sometimes days or only even hours behind in presenting their findings. Yet only one team gets the accolades and their names in textbooks, the others are a mere footnote at best.
This was my first time reading about the history of discoveries like this, and I guess prior I thought these celebrated names had moved humanity forward by decades if not centuries by connecting the dots of the runes of the universe, but the reality is really very different.
Another example that might be of interest is the use of neural networks in Monte Carlo Tree Search. In 2017 there were two papers published showing how effective the combination of NNs and MCTS is for planning in board games.
One is an algorithm called ExIT, published by Anthony et al.; it showed excellent performance on a somewhat obscure board game called Hex, with the authors using a typical academic lab setup. It has a very impressive 300-ish citations.
The other paper is AlphaGo, which has about 10k citations and a Netflix documentary. There are some algorithmic differences that probably make AlphaGo strictly better than ExIT but the big difference seems to mostly be that one group had like a thousand GPUs for a month.
It's even more stark when you come to realize how big the pyramid is just under a new scientific breakthrough. But there are also examples of people that stayed the course against common wisdom (and sometimes while being derided) who eventually succeeded in what they were doing. This is to me the kind of science that is most deserving of prizes, to encourage others to keep looking even when the answers are far from obvious.
I feel that way about string theory. I am merely an armchair physicist, though I did have a physics minor in college, but it just feels... wrong. Ever more layers of complexity to explain away holes, and no real predictive power. But at least as of ~10-15 years ago (and maybe still today, I have lost touch with that world), saying this in any serious setting would not make you any friends, nor get you any funding. It feels to me that we have likely squandered decades of brain power going down that hole.
I hope there are still those toiling away doing unpopular work, that can make a breakthrough....
I'm far outside of these social circles nowadays, but I stumbled across this youtube video of a phd candidate talking about string theory recently (https://www.youtube.com/watch?v=kya_LXa_y1E), and it seemed to imply that the pendulum in those social circles is swinging back again.
Two independent research groups, one from Japan and one from the US, simultaneously published their discovery of the NdFeB magnet in the same issue of the same journal.
I don't know this particular story, but — FWIW — it's common for different teams working in the same field to be aware of each other's manuscripts and impending submissions. When two teams are about to submit papers with the same discovery, they'll often work with an editor to put it in the same journal issue.
By coordinating a simultaneous publication they can get extra publicity for the discovery, both get the first-mover advantage in citations (both papers get cited by everyone), and also get breathing room to be fully rigorous and write the best possible paper.
Quite similar is Darwin and Wallace presenting their evolutionary theories together. Darwin had been working on his for much longer, but spurred by the possibility of losing priority, he presented Origin of Species as a book rather than waiting for formal review + journal publication.
Yeah, looks like it works that way mostly - based on the examples written up in "How Innovation Works: Serendipity, Energy and the Saving of Time" by M Ridley (https://www.amazon.co.uk/How-Innovation-Works-Matt-Ridley/dp...). Having done some R&D myself I tend to think of re-search as "repeated search for something that works" these days. Luck (or lack of) plays a big role too, the circumstances and the personalities of the actors involved likewise. Another book on a similar theme (but for medicine specifically) that I really enjoyed reading is "Happy Accidents: Serendipity in Major Medical Breakthroughs in the Twentieth Century" by MA Mayers (https://www.amazon.co.uk/Happy-Accidents-Serendipity-Breakth...).
Apparently the theory behind LK-99 was inspired by studies from Eastern European scientists. However, after the Soviet collapse, that work was lost.
The twitter thread linked below says that it was an independent discovery and the Korean professor was not aware of the prior knowledge until after the discovery, so it's not inspired as much as parallel development.
Cheeky, but isn't nationalism/patriotism just rational self interest? Why wouldn't I want the system I pay taxes into to put me and other people in this boat first?
I like this definition, personally:
“Patriotism is supporting your country all the time and your government when it deserves it.”
― Mark Twain
It is rational self interest, but self interest beyond a point is selfish, which is generally considered a negative personality trait.
If you want to put it in a game theory perspective, normalizing selfish behavior means that when there's something in another country you could benefit from, you won't, because you'll be the one suffering discrimination.
If we can reach agreements international cooperation can benefit all parties better than the greedy solution.
Funny enough Bardeen went on to work out the theory of conventional superconductivity later (with two other researchers) and got another Nobel prize out of it. What's your issue with Shockley btw?
Nobody can pick fights with Shockley any more (he's dead) - but Shockley is almost a household name, and Bardeen/Brattain's aren't. It's worth trying to adjust the record, both because Shockley was an abusive jerk and because he gets more credit than is due for the transistor.
(I'm not denying that Shockley was brilliant and effective.)
Why are folks trying to prove that this material is a superconductor in roundabout ways, like levitation, diamagnetism, etc.? What is the reason they don't just test to see if electrical current flows without resistance? Surely there is something I am missing here.
With the current fabrication process, they're only getting a chunk with LK-99 particles sprinkled in. Since nobody yet knows how to fabricate a pure chunk of LK-99, it'll be hard to measure the true resistance of LK-99.
>Most likely, the material LK99 as reported in [1-3] is a heterophase structure, with co-existent non-superconducting constituents. This may yield superconducting droplets surrounded by nonsuperconducting material...
>In fact, I find that Cu on this Pb(2) is 1.08 eV more energetically favorable than Cu on the Pb(1) site, suggesting possible difficulties in robustly obtaining Cu substituted on the Pb(1) site.
The paper from Dr. Griffin at LBNL suggests copper atoms have to be placed in a specific (but less likely) position in the molecule to result in the desired flat band characteristic. Also, the original authors and the labs who were able to replicate LK-99 are reporting they had to make multiple batches to even find a tiny piece that shows levitation. This suggests that you just have to be very lucky to produce a sample with high enough concentration of LK-99 to observe levitation.
If we can somehow confirm that LK-99 is truly a room temperature superconductor, billions of dollars of R&D fund will pour in to improve the fabrication process. When the first transistor was invented, people probably weren't imagining that we'll be mass producing them in nanometer scale in the future. Or maybe LK-99 will be stuck in a lab like graphene. Who knows?
From a purely scientific angle, so presumably less amenable to mass production, why doesn't anyone use FT-ICR mass spectrometers to assemble say a unit cell?
There is a good classic primer on FT-ICR, mostly focused on analysis (mass spectrometry) but also mentioning activation energies for reactions and measurement of kinetics etc.
If you dunk a bunch of chemicals (for simplicity, think wet chemistry) in a vial, all reactions and side reactions are simultaneously occurring, so one has little control over what happens on an atomic scale.
FT-ICR can be used to observe the state AND to manipulate the state. It's like having a compact particle collider, but instead of the high (TeV) energies at CERN etc., it's just chemical energy levels.
It happens in high vacuum, so low densities of species, hence not amenable to mass production.
But the instrument is both eyes and hands: one can identify the frequencies corresponding to each ionized molecule, and selectively energize or de-energize specific species to encourage or prevent main and competing reactions, by pumping or damping specific frequencies.
One may build up a molecule in elementary steps and eject finished molecules. Those steps can occur at the same time in the same vessel. It's like having a miniature digitally controlled chemical plant, without having to redo all the pipework if you decide to use a different pathway here or there.
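To make the frequency idea concrete, here's a minimal sketch of the cyclotron relation f = qB/(2πm) that the excitation and detection rely on (the ions and the 7 T field are just illustrative assumptions):

```python
import math

# Minimal sketch of the "each ionized molecule has its own frequency" idea:
# in an ICR cell the cyclotron frequency depends only on the ion's
# mass-to-charge ratio and the magnetic field, f = q*B / (2*pi*m).
E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def cyclotron_freq_hz(mass_amu, charge, b_tesla):
    return charge * E_CHARGE * b_tesla / (2 * math.pi * mass_amu * AMU)

B = 7.0  # tesla, a typical FT-ICR magnet strength (assumed)
for name, mass_amu in [("Cu+ (63.5 u)", 63.5), ("Pb+ (207.2 u)", 207.2)]:
    freq = cyclotron_freq_hz(mass_amu, 1, B)
    print(f"{name}: ~{freq/1e3:.0f} kHz")  # excite or damp the species at this frequency
```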
The good news is that we should know within a few weeks if rock surgery is effective. If that works they can increase the purity and sample size without needing to find a new creation process
There are a ton of suggestions floating around already on how the efficiency could be improved and some of those are very clever and involve using the superconducting properties of the individual grains to help sort them out from the bulk material.
How do you explain the almost three orders of magnitude drop in resistance in Fig. 6d in one of the original articles ( https://arxiv.org/pdf/2307.12037.pdf ) with a few LK-99 particles sprinkled here and there? There must be a current path along which more than 99.8% of the material is in the supposed superconducting state. So the particles almost touch but not quite yet?
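For what it's worth, a crude series-resistor sanity check of that 99.8% reasoning (assuming a single current path where the superconducting fraction drops to zero resistance and the rest is unchanged):

```python
# If a fraction f of a single series current path drops to zero resistance and
# the rest stays unchanged, the measured resistance falls by a factor 1/(1-f).
for f in (0.9, 0.99, 0.998, 0.999):
    print(f"superconducting fraction {f:.3f} -> resistance drops ~{1/(1-f):,.0f}x")
# 0.998 already gives a ~500x drop, i.e. close to three orders of magnitude
```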
I think the more likely explanation is that the particles do touch each other but the interface is not superconductive. In other words, it is a polycrystalline material, and most of it is LK-99, but the grain boundaries are not a very good conductor. In conventional superconductors grain boundaries don't disrupt superconductivity because they are 3D superconductors, but in this allegedly 1D superconductor the superconducting channels in most cases don't meet at the grain boundaries, so the current has to overcome the resistance of some material that is almost an insulator.
If that is the case it will be difficult to produce a material that is macroscopically superconducting. But I hope researchers will be able to make single crystals that are large enough for resistance measurements so that finally it can be determined if this material is a superconductor or not. For practical uses the best result that can be achieved with this material may be a metal-LK-99 composite where the LK-99 particles lower the resistivity of the metal by 50-90%.
I believe that with the synthesis method proposed by the Korean team, i.e. the chemical reaction between a certain kind of lead sulfate and copper phosphide, the chances of progress are slim.
The Korean team appears to have been stuck for several years by the lack of reproducibility of this synthesis method. It was a great discovery that has shown this material must have some very interesting properties, perhaps even superconductivity at ambient temperature and pressure. But in order to measure its properties and evaluate the possible practical applications, a much more precise method for enforcing the desired crystal structure is required than mixing powders and baking them into a ceramic.
Perhaps such a method for producing samples with deterministic properties would be to develop first a method to grow monocrystals of the special kind of lead phosphate that forms the base crystal structure, maybe by drawing the crystals from melt.
Once monocrystals of this kind of lead phosphate are available, they could be doped with copper, e.g. by ion implantation. By controlling and varying the parameters of the process, e.g. the angle of incidence and the velocity of the ions and the thermal profile used for annealing, it is likely that reproducible samples can be produced, where the copper ions substitute lead in the useful places and not in the others.
By this method it would be possible to produce only thin layers of LK-99, but that should be enough to enable the characterization of the material.
Moreover, because LK-99 is very fragile, it is unlikely that it could be used to make cables or coils. Practical uses where LK-99 would be deposited as thin films are much more likely.
As an alternative to ion implantation, which might be able to produce thicker layers, perhaps once monocrystals of the base lead phosphate are available it may be possible to develop some method of chemical vapor deposition, to grow epitaxially a layer of LK-99 over the base crystal, but with such a method it is less obvious if there is any way to control which lead atoms are substituted, though this may depend on the orientation of the base crystal.
Drawing from melt to achieve a consistent crystal orientation was my first thought as well.
Looking at the Griffin paper, the Pb(2) site is described as being 1.08eV 'more energetically favorable'. I am having trouble understanding what this means.
Years back I did MOCVD semiconductor fabrication research, I never reached a mastery of it but I am still trying to leverage that understanding here.
During growth, adatoms that incorporate into proper crystal lattice locations enter a lower energy state compared to those in imperfect locations. The energy state is lower in the sense that it requires more energy to remove them from that location. Hence careful control of temperature allows you to selectively favor incorporation into these low energy locations e.g. choose a temperature high enough to remove adatoms from 'imperfect' locations but low enough to not remove them from 'perfect' (low energy) locations.
So when the author says 'energetically favorable' am I to understand this means the Pb(2) location represents a lower energy state (i.e. more difficult to remove Cu from this location) or the opposite? Or something else entirely?
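As I understand it, "more energetically favorable" does mean the Cu-on-Pb(2) configuration sits 1.08 eV lower in total energy, i.e. it's the deeper well and the harder one to kick Cu out of. A toy two-site Boltzmann estimate (the temperatures and the equilibrium-only picture are my assumptions; real occupation also depends on kinetics, as in your MOCVD analogy) shows why that makes the "useful" Pb(1) substitution rare:

```python
import math

# Toy two-site Boltzmann estimate of what a 1.08 eV site preference implies.
K_B = 8.617333262e-5   # Boltzmann constant, eV/K
dE = 1.08              # eV by which Cu-on-Pb(2) is lower in energy than Cu-on-Pb(1)

for t_c in (925, 25):  # roughly the reported synthesis temperature, and room temperature
    T = t_c + 273.15
    ratio = math.exp(dE / (K_B * T))
    print(f"{t_c:4d} C: expected Pb(2):Pb(1) occupation ratio ~ {ratio:.1e}")
# ~3e4 : 1 even at ~925 C, which is why Griffin flags "possible difficulties
# in robustly obtaining Cu substituted on the Pb(1) site"
```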
I have heard people much more knowledgeable than me say that measuring true zero resistance is actually quite difficult and takes some degree of specialized equipment, especially with such small samples. That may be part of it.
Just getting the probes to connect reliably is tricky. Depending on how large the superconducting features are, it may be anywhere from just difficult to next to impossible to do accurately (for instance, if the size of the superconducting features is smaller than the probe size).
Right, if our best regular conductors (used in your ohmmeter) are ~10^-8 Ω·m in resistivity and superconductivity is (by convention) less than 10^-11 Ω·m, one can see right away the simple regular methods won't work and some cleverness is needed.
The conductors of your ohmmeter are not that important, though. You can work around that by using four-terminal sensing, and you can of course also calibrate your probes by directly touching them together. Even if your ohmmeter conductors have a resistance of several ohm, you could still get an accurate measurement if your tool has a high enough resolution.
A bigger issue is going to be sample size. A 1mm-diameter 1mm-long rod of silver has a resistance of about 20 μΩ (or 2e-5) at room temperature. That's already getting tricky to measure with lab-grade equipment without pushing insane currents through it, let alone anything even smaller. If you want to measure a 1m-diameter 1m-long silver rod (which would be 0.02μΩ or 2e-8) you could just push a few thousand amps through it and reliably measure that using a household multimeter in the mV range - but do that with a small sample and it'll evaporate.
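For reference, a quick sketch of those rod numbers with R = ρL/A, assuming room-temperature silver resistivity of ~1.59e-8 Ω·m:

```python
import math

# Reproducing the silver-rod numbers above with R = rho * L / A.
RHO_AG = 1.59e-8  # ohm*m, room-temperature silver (assumed)

def rod_resistance(diameter_m, length_m, rho=RHO_AG):
    area = math.pi * (diameter_m / 2) ** 2
    return rho * length_m / area

print(f"1 mm x 1 mm rod: {rod_resistance(1e-3, 1e-3):.1e} ohm")  # ~2e-5 ohm (about 20 uOhm)
print(f"1 m  x 1 m  rod: {rod_resistance(1.0, 1.0):.1e} ohm")    # ~2e-8 ohm (about 0.02 uOhm)
```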
> Even if your ohmmeter conductors have a resistance of several ohm, you could still get an accurate measurement if your tool has a high enough resolution.
Not that low in range though, you will end up seeing thermal noise that dwarfs your measurement.
> superconductivity is (by convention) less than 10^-11,
Ah, so you're saying that superconductivity is not actual zero resistance, but something close to it, and in fact only a factor of 1000x less resistive than the best conductor?
If that is so, this is something that I had previously thought would make a lot more sense to me.
But in that case it's not intuitive to me how SMES is possible with a 0% discharge rate. Shouldn't a significant fraction of the electrons looping around the coils be lost after many loops? (I know very little about electricity, as you can probably tell, never mind superconductors).
The difference now is that we're seeing a premature preprint being replicated in real time.
Even in that paper, the authors note: "The way the samples have been prepared seems to be of crucial importance: Michel et al. [21] obtained a single-phase perovskite by mixing the oxides of La and Cu and BaCO3 in an appropriate ratio and subsequent annealing at 1,000 °C in air. We also applied this annealing condition to one of our samples, obtained by the decomposition of the corresponding oxalates, and found no superconductivity." And you can see that in their resistivity/temperature graph of samples prepared using different protocols.
Considering how that preprint has sparked interest in other research institutions and multiplied the resources allocated to the problem, I would say this publication was not premature, it's most other research results that are late.
The paper was leaked early. The team wanted more time to get more attempts at producing it and improving their yield. The paper also wasn't generally up to their writing standards. It's essentially an early draft that was leaked.
That's right. The standard way to measure resistance without being sensitive to contact resistance is called the four-terminal method, see (https://en.m.wikipedia.org/wiki/Four-terminal_sensing). You drive a current using two outer wires, but then detect the voltage across the sample using a different pair of wires. You'll measure zero voltage if it's superconducting, since V=IR. Or if one of the probe wires became detached.
The first LK-99 paper used this method to claim zero resistivity, but people complained that if the inner probes lost contact, that would also be consistent with their data. This criticism doesn't totally make sense to me, since the apparent superconductivity came and went in the expected way as they changed an external magnetic field. I don't understand how a loose terminal could mimic that figure (I think it was in figure 1).
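For illustration, a toy circuit model of why the four-terminal arrangement ignores lead and contact resistance (all values are made up):

```python
# Toy model of two-terminal vs four-terminal sensing.
I = 0.1           # A, known current driven through the outer pair
R_sample = 2e-5   # ohm, what we actually want to know
R_lead = 0.5      # ohm, each wire out to the sample
R_contact = 1.0   # ohm, each probe-to-sample contact

# Two-terminal: the meter sees sample + both leads + both contacts in series.
R_two_terminal = R_sample + 2 * (R_lead + R_contact)

# Four-terminal: essentially no current flows in the voltage pair, so its lead
# and contact resistances drop no voltage; divide V across the sample by the known I.
R_four_terminal = (I * R_sample) / I

print(f"two-terminal reading:  {R_two_terminal:.3f} ohm")   # swamped by leads/contacts
print(f"four-terminal reading: {R_four_terminal:.1e} ohm")  # the 2e-5 ohm we wanted
```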
One case in which you can see something similar to an SC transition is percolation, where the sample is a mix of regular metal and insulator, and you (accidentally) connect the current contacts to metal but the voltage probes to insulator [1].
It also doesn't make sense to me, because you would expect they didn't just say "omg superconductivity!" on their first measurement; presumably they measured more than once, verified connections, and generally assumed scrutiny would be high on any claim. And (from what I've read of the history so far) the reason it's from 1999 and only being published now is that they didn't believe it was real to begin with, but thought it was a measurement mistake. This doesn't mean others should be credulous, and replication is crucial. But it seems to beggar belief to assume they made such an amateur mistake. I would assume outright fraud first.
You drive current through the outer probes. The inner probes measure voltage and are not driven. When measuring non-zero resistance, you can vary the driving current to confirm the voltage you measure also varies linearly (ruling out thermal effects, capacitance of the material, etc.). When measuring zero resistance, you can't distinguish that from a voltage probe error.
> a badly attached probe could also result in zero resistance for example
No, a badly attached probe would usually show a larger resistance, not a smaller one. That's actually the easiest error to make, making improper contact with the sample. The resistance is measured indirectly using a reference current. So you'd measure a higher resistance or a break rather than zero if a probe were not attached correctly (unless the two voltage probes are touching, but that would normally speaking be spotted).
The diamagnetism is simply easier to verify using an impure or small sample.
It's apparently hard to measure zero vs extremely low resistance with the two probe setup you are imagining (I guess because the probes and wires aren't superconducting, so most of the resistance in the circuit is not in the sample). The graphs I have seen are all made with a four probe setup [1], where a constant current is run through the outer probe pair and the voltage across the inner pair is measured. If the inner probes have contact issues, the voltage (and inferred resistance) drops, potentially to zero.
That's what he said. You inject a known current and measure the voltage drop across the item you want to measure the resistance of, and then use Ohm's law.
In a four probe setup you are measuring voltage, not resistance. But you are right that the measurement can still influence the results when the voltages measured are tiny.
The way to think of this is simple: you can't measure anything without subtly joining the circuitry that you are measuring, and that has an effect on the properties of the circuit as a whole, for which you have to compensate. In this case the voltage measurement is going to consume a tiny bit of power, and that is due to the resistance of the measurement apparatus, even if it isn't in the main current path but a secondary one. But the people that do these kinds of measurements tend to be well aware of this and will pick their measurement gear and reference current to minimize the chances of that happening.
I sort of don't get this either. If your testing equipment is introducing resistance, can you not "tare" or calibrate your measurements by measuring the equipment's own resistance in a direct circuit between the probes, then introduce your sample and measure the difference? The resistance in a series circuit is additive, no?
The parasitic resistance will be orders of magnitude larger than your sample resistance, any detector capable of detecting the sample resistance would be completely swamped out by it.
They are measuring voltage, not resistance (the resistance is computed). It's impossible to measure resistance without current and some voltage across the resistance, and while the parasitic effects are small the resistance we're talking about (and hence the voltage across that resistance) is so small that second order effects introduced by the probe wires can have a real effect, for instance if the probe wires have a higher resistance that makes them more susceptible to electric fields, which in turn would show up as a voltage. So this is anything but trivial.
If you pick your reference current properly this effect should be extremely small. It will have a little bit of effect but nothing that you should normally have to worry about.
My understanding is that synthesis of the exactly right crystal structure is very hard (for every 10 lead atoms, 1 - the exact right one - needs to be substituted with copper), so the samples are small and inhomogeneous. As a result measuring resistivity won't be illuminating until large amounts of perfect material can be produced.
Because (as someone explained very well in another HN comment) you can't measure zero ohms. Any device that measures resistance always has a minimum value it can resolve. It's far easier to detect superconductivity using the weird things all superconductors do in the presence of magnetic fields (i.e., levitation).
They do both. But measuring the resistance of something that is that small is super hard because it will be close to zero anyway even if it isn't superconducting. So a larger sample would give much more conclusive results.
Without being intentionally snarky, the same logic applies to measuring current and voltage. What is observed is an effect of zero resistance, not zero resistance itself (whatever that would mean).
The Korean researcher says this is a 1D superconductor (as opposed to a 2D sheet or 3D), which means you would need a single line of superconducting molecules from end to end. That's likely why it doesn't float all the way, as only parts of the chunk have these lines, scattered all over the solid.
However, if they perfect 1d production, they can layer in a bunch of them to create a quasi 2d or 3d superconductor.
It's a limitation of the most basic approach to the problem (measuring resistance), but not a limitation of every approach (such as measuring current).
You can inject a current into a superconducting coil and take measurements of the resultant magnetic field as the current circles for an indefinite period of time. I'm not able to see how this approach analogizes to water in a cup.
I don't think that works as advertised: any measurement of the current will likely produce a magnetic field itself -- for example from the current in a Hall probe. This current will induce a small, opposite current in the superconducting coil. So the current will go down.
And even if not, what you would need is to measure the change of the field over time. This has finite resolution, so you can't distinguish no resistance from very very small resistance.
There are several tests that have been ongoing over decades doing exactly this. The small loss of current during measurement occurs, but you only engage with the field intermittently, and you can calculate approximately what the loss should be.
Yes it could be some tiny resistance, but the same issue occurs with the resolution/accuracy of the voltage or current measurement you would make.
Exactly my point. You cannot measure that the resistance is exactly zero, you can "only" give an upper bound. The upper bound depends on your approach, but no approach can give you a zero upper bound.
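Right, and you can put numbers on that bound. A sketch, with purely illustrative values for the loop inductance, observation time, and measurement resolution:

```python
import math

# A persistent current in a loop of inductance L decays as I(t) = I0 * exp(-R*t/L).
# If after time T the current is unchanged to within your fractional resolution eps,
# all you can conclude is R <= -(L/T) * ln(1 - eps).
L_loop = 1e-3                 # H, loop inductance (illustrative)
T_obs = 365 * 24 * 3600.0     # s, a year of watching the field
eps = 1e-6                    # smallest fractional current change you could resolve

R_upper = -(L_loop / T_obs) * math.log(1 - eps)
print(f"R <= {R_upper:.1e} ohm")  # ~3e-17 ohm: a spectacular bound, but still only a bound
```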
Would one method (assuming a pure sample) be to pass a high current through it and measure if it heats up? I know nothing about this, but that would be my first guess.
The paper shows a clear phase transition to diamagnetism as the material is cooled. That would be seen in superconductors and not in regular diamagnetic materials. I'm not aware of any that have that kind of phase transition. Though since we're in weird territory here, it's important to note some weird non-superconductor behavior that's beyond regular diamagnetism might be going on as an alternative explanation. But it is weird.
ETA: also, in the presence of a magnetic field, that transition temperature decreases. That's pretty huge. Unless this paper is fraudulent, I take this as the biggest positive evidence so far that something besides simple diamagnetism is going on. And, cards on table, with the assumption that the paper is not fraudulent, this pushes my odds above 50% for the first time.
Magnetic transitions as a function of temperature are not unheard of, and it makes sense for them to depend on an external field (they are magnetic after all). Lanthanum cobaltite, for example, has a transition from diamagnet to paramagnet, likely due to a change of spin state (see, e.g., [1]). I'm not saying that's what's happening, but a transition (if it is there; hastily written papers tend to have subtle inaccuracies) doesn't rule out non-SC diamagnetism.
Can one know how many people are in that market? When prediction markets were mentioned early during the LK-99 story, I looked around a bit out of interest: which platforms exist, which topics they cover, how many people participate in those markets. Where I could find the number of predictors, it seemed pretty underwhelming: one, ten, fifty, a hundred; I think the highest I ever saw was a bit over a thousand, but even then fewer than a hundred or so were using real money, the majority was using play money. Did I miss the place where everyone is? Is this to be expected because there are so many topics you can bet on? Are the resulting prices still meaningful even with only a small number of predictors?
I wanted to bet $10 or so, but the two links above both seem to be some sort of cryptocurrency thing. I dunno, isn't there a way to gamble on the future that isn't so... shifty?
How did I imply that there are many real-money prediction markets? That was exactly my question: did I miss the ones where everyone is, or are prediction markets just not that popular, which is why even the big ones do not have a large number of predictors?
There is just one real working prediction market: Polymarket. However, it is currently illegal to use in the US, because of old laws. This can be circumvented with cryptocurrency, but most Americans still won't use such a hack. Which strongly limits the popularity of Polymarket, as US Americans happen to be the people who are most interested in prediction markets.
There are also the prediction ... platforms Metaculus and Manifold, which are legal in the US, since they only use made-up internet points instead of real money. I'm not sure whether this even hurts their accuracy compared to Polymarket though.
If enough people cared, you could probably make the same argument that got sports betting into a place of legality and get at least some subset of prediction markets legalized.
The sportsbook argument was that sports betting involves more than random chance - knowledge of the game, the players, etc. effectively turn it into a matter of skill.
I doubt there are enough interested parties to bother for prediction markets, though.
They got about 50 degrees C, but note that the bulk sample and the more pure tiny sample are different.
It definitely contributes to the general feel that however this works, it's an inefficient synthesis that's problematically generating the material we want.
It certainly makes me wonder if something like molecular-beam epitaxy would be able to directly grow a more pure sample (but I imagine that's expensive and time-consuming to setup, plus not really what we're hoping for if we want to use lots of it).
326K (+127 deg F) and 340K (+152 deg F) for their less and more pure samples respectively. And, yes, if it's a superconductor as we understand them, that's the temperature at which it would have zero resistance as well.
It's obviously hyperbole. But if we were ever to use these SCs to transport energy from solar farms in the Sahara, they might have to operate in that temperature range.
Five meters down in the desert you're under 25 degrees year round. It's the average that matters for soil temps, not the peaks. You can approximate this by taking the average air temperature and defining the surface as the top five meters. Below the 10 meter mark the day/night cycle influence is pretty much negligible (but that would be more costly).
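A rough check of that rule of thumb using the classic damped annual temperature wave (the soil thermal diffusivity is an assumed typical value, so take the exact percentages loosely):

```python
import math

# Damped annual temperature wave: amplitude(z) = amplitude(0) * exp(-z/d),
# with damping depth d = sqrt(2*alpha/omega).
alpha = 5e-7                          # m^2/s, typical dry soil diffusivity (assumption)
omega = 2 * math.pi / (365 * 86400)   # annual cycle, rad/s
d = math.sqrt(2 * alpha / omega)      # ~2.2 m for these numbers

for z in (2, 5, 10):
    print(f"{z:2d} m deep: annual swing down to {100*math.exp(-z/d):.0f}% of the surface swing")
```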
LK99 isn't necessarily a stand-in for helium cooled superconductors - if all SC were equivalent to any other, YBCO and similar would have been enough to make MRIs significantly cheaper - LN2 is easy and cheap to produce.
First, it shows a graph of magnetic moment vs. temperature: as they heat it, it loses the diamagnetism around the temperature up to which LK-99 is said to be superconducting.
Second, only a superconductor will have a net-zero field, which means "stable" levitation. In the video they approach the sample with the magnet and flip it while the piece stays mostly "in place". A regular diamagnet generates an external field that "follows" the applied field, so it would likely move sideways; that is why, to "levitate" a diamagnet "in place", people normally use a Halbach array.
EDIT: A Halbach array is made by alternating the N-S poles of the magnets, so that the forces of repulsion created by the diamagnet cancel. This is why you will see people using multiple magnets when levitating pyrolytic graphite.
They have quantitative magnetization-versus-temperature data taken using a PPMS (an automated physical property measurement system). It shows strong evidence of some kind of diamagnetic transition at ~ 320K. It seems very likely now that this material has some kind of interesting magnetic property, whether or not it's due to superconductivity.
Noob question: if whatever is happening is strong enough to raise one of the ends of the sample, why doesn't it raise both? After all, gravitation is many orders of magnitude weaker than electromagnetism. Did they calibrate the setup to closely match the gravitational force on the sample? Why not push a little more and make it fly up to the cap of the container?
Magnetic force scales as 1/r^3, not 1/r^2 like gravity. That's why your standard issue fridge magnet measurably attracts stuff only from a very close distance, but when it does, it easily counters the gravitational attraction of the entire planet¹. This 1/r^3 relationship can be derived easily enough by integrating, but essentially it's because magnets are dipoles and the farther away you are, the smaller the apparent distance between the poles and the "more neutral" the magnet looks like.
Anyway, that's why there's an equilibrium distance where the forces balance. But superconductors also exhibit a very strange phenomenon called flux pinning [1] where a levitating object is held in place by magnetic field lines and you can even turn the whole thing upside down and it still levitates even though the forces don't cancel each other out anymore!
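If it helps, here's a quick numerical check of the 1/r^3 claim, modeling the magnet as two opposite inverse-square "poles" a small distance apart:

```python
# Model the magnet as two opposite inverse-square "poles" of unit strength
# separated by d, and look at the on-axis field.
d = 0.01  # pole separation, arbitrary units

def on_axis_field(r):
    # superposition of the two opposite-sign inverse-square contributions
    return 1.0 / (r - d / 2) ** 2 - 1.0 / (r + d / 2) ** 2

for r in (0.05, 0.5, 5.0, 50.0):
    print(f"r = {r:5.2f}: field * r^3 = {on_axis_field(r) * r**3:.4f}")
# the product settles to a constant (2*d), i.e. the far field falls off as 1/r^3
```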
> Magnetic force scales as 1/r^3, not 1/r^2 like gravity
The magnetic force and gravity are two of the four fundamental forces, no? The others being the strong and weak nuclear force? At what rate do those two scale at?
If you are really close to one pole of a magnet and far away from the other then the force scales like 1/r^2, it is in the far field under the influence of both poles of the magnet that it scales like 1/r^3. Electrostatics scales like 1/r^2 (because you fet isolated charges).
The other two are a complicated story.
If you have time, I would love to hear that complicated story.
I remember once hearing that the rate of those forces' decay indicates (but does not prove) that they decay in more than three dimensions. This was an accomplished chemist talking, so I'm sure that she was being concise but factual.
I might not be able to give a super coherent picture but I will have a go. The force that holds nuclei together is based on the residual strong force; it is a complex system, so the models used for it tend to be semi-empirical (Yukawa or Reid potentials). They go from being repulsive at short distances to being attractive, with something like an exponential decay as you go further away. The force is very strong at 1 femtometre and pretty much negligible 3 femtometres away.
This is only the residual of the strong force though, which is what you can think of as holding quarks together. This force is very strange in that it doesn't get weaker the further the quarks get apart, but if it gets far enough then the energy will create new particles. That's just the strong force, I will leave it to someone more knowledgeable than me to try to explain the weak force.
For a complicated shape, the formula is (something like [1]) F = A/r^2 + B/r^3 + C/r^4 + ..., where r is the distance to the "center" (and some parts are closer and some parts are farther away).
An important property of magnets is that A=0, so the r^2 term disappears. And in many cases it can be simplified to F = B/r^3 [1, again].
weak and strong forces are mediated by particles that have mass and therefore their force falls off basically exponentially, as exp(-m*r)
it's not really the same as decaying in extra dimensions, because in that case the force law would look something like 1/r^(n-1)
The claim here was that "magnetic force decayed by 1/r³" which is the result of seeing a dipole ("two sources") from far away. Each pole decays by 1/r² and the net result (the combined effect of both poles on a test particle) is proportional to 1/r³
> That's why your standard issue fridge magnet measurably attracts stuff only from a very close distance, but when it does, it easily counters the gravitational attraction of the entire planet
You can still get that effect if both forces are r^2. You can even get it with the same force—consider standing on the moon looking up at the earth. It will just be a more marked change with r^3.
Nice! For show, I would try to make it levitate. Probably using a stronger magnet, inside a small (tiny?) glass tube, so it slides up a bit. I wouldn't do "rock surgery" just yet to remove dead weight. Levitation with 4 magnets in a checkerboard fashion, as usually done for pyrolytic carbon sheets, maybe doesn't work since the sample is not flat.
Also, turning the magnet upside down seems useful. And then, heating up to show that it drops at a certain temperature. I wonder what would be needed in this case; I guess less than 100° C.
In any case, the "show" part is important. Good video quality is important.
Reading further below, I'm actually amazed by all the theoretical studies that have already been done from first principles (all apparently supporting the possibility of superconductivity). That's a pretty fast pace for science!
Magnus Carlsen said that if you tell him there's a winning move on a chess board he could find it very quickly, because his focus becomes extremely narrow.
I wonder if something similar is happening here. Since the scope was defined as LK-99, it becomes a narrow query instead of a broad one.
This is not just telling someone there's a winning move. It's telling them that X is a winning move and they just have to verify that X is indeed one. Many, many problems are such that it's difficult to find a solution but easy to verify one.
I know nothing about this but I stay curious. Just like the Navier-Stokes equations can be solved with numerical approximations, can this be verified even if we never solve it exactly?
This is the first time in years where I physically have goosebumps the more this seems to be verified. In a good way.
The potential changes this can introduce are equivalent to when Faraday, Volta, and all the other 18th/19th century scientists started figuring out how electricity works. They had no idea how much it would change every aspect of life in the century to come after them.
There is a reason why room temperature (ambient pressure) superconductors have been one of the holy grails of physics for such a long time: the implications are profound.
I used to be a particle physicist, and some of the more complex systems were just those used to cool the superconducting magnets down to cold enough that they become superconducting. If you can do that at ambient temp, you don't have to bother with that entire system.
Also: fusion reactors rely on superconducting magnets (or, if you are JET, live with the fact that you can only run your magnets for a few seconds before they overheat), so this can have a large impact on future fusion reactors.
I'm already rooting for CFS' Sparc over ITER anyway. ITER is too big & expensive, if Sparc works it's an indication that we can build fusion "on a budget", too.
I don't think CFS will care, because they've already sourced much of the HTS tape that they need, and their schedule doesn't permit waiting for LK-99 to be proven/disproven and all the years it will take to produce LK-99 in bulk anyway.
It's so insanely cheaper, because not only do you not need the cryogenic cooling (and all the support and maintenance systems and consumables like helium or nitrogen or whatever), but LK-99 is made from relatively common elements versus rare earth elements.
I get the idea of superconductivity at room temperature in theory. But I don't have enough knowledge to understand in real terms, how will the world change if this is true?
You can't get CT scans very often, because they hit you with large doses of ionizing radiation. Ultrasounds are low-resolution spotlights; you shine them at a particular spot to diagnose something specific.
With this you could get an MRI at your annual checkup. You could diagnose any number of diseases like that, not to mention 95% of cancers. Each year your scan is automatically compared to the previous year's, and any sudden changes in morphology can be biopsied. The learning would be revolutionary for medical science as well: right now we have so little data on what kinds of benign growths people have that our best method for figuring out if a mass is a problem is asking if there are any other symptoms. Not to mention entirely new kinds of medical devices would be possible, e.g. using SQUIDs.
Ground-imaging MRI would also be revolutionized. Archeology, paleontology, geology, mapping resources and finding minerals would experience a quantum leap. You would be able to drive a car through the desert and spot fossils or faults or mineral signatures.
Space travel would become essentially free with the use of launch loops. Which would also make long-distance travel incredibly cheap and practically pollution-free. You would need electricity alone to reach low earth orbit, or to accelerate planes to multiples of the speed of sound.
Grid-level storage, peaker plants and load-following would become nearly obsolete. Superconducting catenaries would connect every nation on earth. Normally plants have to turn off when everyone goes to sleep; now factories in China can be powered by US fission. Canadian homes could be kept warm by Australian solar. HVDC interlinks would be obsolete. We might eventually transition away from AC power entirely.
CPUs could be anywhere from 10% to 50% more efficient. GPUs even more so. Fires, particularly house fires would become less common as wires simply stop conducting when they are overloaded.
> We might eventually transition away from AC power entirely.
This is actually a really good point I hadn't fully considered, but it's right: the primary reason we use high voltage anywhere is because it minimizes resistive losses (and the reason we use AC is because it's easy to transform between voltages).
But most of the stuff in my home doesn't need high voltage - it's all running at 5V or 12V. Or it's a motor which is magnetically driven and depends solely on magnetic field strength (which is independent of voltage).
If all your conductors have zero resistance, then high voltage is obsolete. You could safely run a residential property on 12V power. Home electrical hazards would be a thing of the past.
This is drawing completely wrong conclusions from erroneous oversimplifications:
We're using high transmission voltages to keep current down. Superconductors would not change this AT ALL; superconductivity generally breaks down not only with temperature increases but also magnetic field strength (i.e. current).
Switching large currents is also a hassle; especially with non-resistive loads.
And completely changing household electricity architecture is simply not gonna happen just to marginally improve safety, cost/benefit ratio is WAY too high.
A superconductor running at high amperage requiring more superconductor is still a superconductor. The losses you take are zero.
Any amount of cross-section of copper though is not - you take losses at (I^2)*R. You lose power as a square of the current.
There is an enormous difference between using superconductors at high currents and using any normal material.
Obviously the impact of this depends on what the critical current of a hypothetical room-temperature superconductor ends up being...but REBCO tapes achieve current densities of >40,000A/mm2 (at 77K). Depending on what you end up with, the expense and danger of maintaining the high voltage infrastructure could easily be seen as not worth it - particularly if it speeds up the ability to build out and maintain power lines.
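A toy comparison of that I^2*R point (the line resistance and voltages are purely illustrative, not a real line design):

```python
# Resistive loss vs a superconductor for the same delivered power.
P = 1e9    # W, 1 GW delivered
R = 1.0    # ohm, total resistance of a conventional line (illustrative)

for V in (400e3, 200e3, 100e3):
    I = P / V
    loss = I ** 2 * R   # the I^2 * R term discussed above
    print(f"{V/1e3:5.0f} kV: I = {I/1e3:5.1f} kA, loss ~ {loss/1e6:6.1f} MW ({100*loss/P:.1f}%)")

# With R = 0 the I^2*R term vanishes at any voltage, though (as noted in the
# replies) critical current and field still limit a real superconducting line.
```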
Sure, but transmission losses are generally a low single-digit percentage; eliminating those will not have much impact, but on the other hand your superconductor is EXTREMELY unlikely to be even close to cost competitive with aluminum/steel core wire.
Even if you could achieve critical currents comparable to conventional high-temperature superconductors at ambient temperature (which appears *highly* doubtful!), keeping high power transmissions lines at human-survivable voltages would be a tremendous waste of super-conducting material.
And even inside homes it seems quite farfetched to me to scale down voltages-- nobody wants to use plugs and switches rated for 200 amps just for their cheap toaster...
Yes, and for short haul that works fine. But for really long haul it doesn't, hence HVDC so that's what you compare with: the situation where it makes a difference such that extra cost incurred doesn't immediately invalidate your option. HVDC is much better comparison material than your average overhead powerline. For the same reason we don't compare bicycles with trucks for long haul cargo but we do compare bicycles with cars for shorter distances and personal travel.
NordLink flows 1400 MW. Wholesale electricity in Germany is roughly $105/MWh.
365x24x(1400x.07)x105 = $90 million per year. Adds up to the cost of the total project every 17-22 years. Over 20 years it's $1.8 million per km. If the superconductor is 20 kg/m (2.4" or 6.2 cm width, huge), that's $90 per kilogram.
10x the cost of copper.
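Reproducing that back-of-envelope in one place (every input here is the comment's assumption, including the roughly 1000 km length implied by the per-km figure):

```python
# Back-of-envelope reproduction of the NordLink numbers above; none of these
# inputs are verified figures.
power_mw = 1400                 # NordLink rated flow
loss_fraction = 0.07            # losses a superconducting link would avoid
price_per_mwh = 105             # rough German wholesale price, $/MWh
hours_per_year = 365 * 24

annual_savings = hours_per_year * power_mw * loss_fraction * price_per_mwh
print(f"annual savings: ${annual_savings/1e6:.0f}M")          # ~$90M/year

line_km = 1000                  # length implied by the $1.8M/km figure
per_km_20yr = 20 * annual_savings / line_km
print(f"20-year savings per km: ${per_km_20yr/1e6:.1f}M")     # ~$1.8M/km

kg_per_m = 20                   # assumed superconductor mass per metre
print(f"break-even price: ${per_km_20yr / (kg_per_m * 1000):.0f}/kg")  # ~$90/kg
```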
It's interesting to see how many assumptions about our world are underpinned by the lack of superconducting material. That also immediately gives you an idea of how transformative (heh.) room temperature superconductors would be.
> But most of the stuff in my home doesn't need high voltage - it's all running at 5V or 12V.
85 volt DC carries the same power as 120 volt AC, but 85 volts DC is essentially safe to touch. The human body has a much lower AC impedance, so it's MUCH more dangerous. DC does still hurt, though.
40-80 volts (see also: split phases) DC is very convenient for most electronics. It's really just things with batteries that want 5-12 volts, but stepping that down isn't too hard.
At the grid scale, it's a question of which is cheaper. If the infrastructure becomes much more expensive (because the wires are SC) then you can save money by using DC (which gives you 41% more power). If its cheaper to use transformers than it is to use more superconductors and semiconductors to convert voltages, they'll do that.
Either way the grid would stay relatively high voltage (10s of kV), because it's just always going to be worth it at that scale to minimize the conductor area.
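For anyone wondering where the 41% comes from: under the usual assumptions that the conductor current is thermally limited and the insulation is limited by peak voltage, DC can sit at the AC peak voltage continuously. A minimal sketch with illustrative numbers:

```python
import math

# Same peak-voltage limit, same current limit: DC vs AC deliverable power.
V_peak = 170.0    # V, insulation-limited peak (about a 120 V RMS AC system)
I_max = 15.0      # A, thermally limited conductor current

P_ac = (V_peak / math.sqrt(2)) * I_max   # AC power goes with V_rms * I
P_dc = V_peak * I_max                    # DC can sit at the peak continuously

print(f"AC: {P_ac:.0f} W, DC: {P_dc:.0f} W, ratio = {P_dc / P_ac:.2f}")  # ~1.41
```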
We use AC because changing voltage levels with it is extremely simple and efficient compared to DC.
In fact the only (practical) way to convert DC voltage levels is to convert to AC, do the level conversion, then convert back to DC.
Believe it or not, DC already is more efficient for energy transfer, which is why there are already high-voltage DC transmission lines. You don't have to deal with reactive parasitics.
But again the killer is that AC voltages are so easy to step up and down, and it can be done with >99% efficiency.
Full body MRI scans are only expensive in the West, outside the west you can get one done for $250. This is a labor and regulatory capture problem and not a technology problem and will not be affected in any meaningful way by better superconductors.
Even if it were $250, that's a pretty high cost relative to the annual cost of insurance. It's too high to justify as a routine diagnostic tool. The financial benefit here is that earlier identification would save money in long-term treatment. MRIs don't cure cancer, so the direct benefit only applies to the limited savings on the very small subset of people who actually get cancers that could be identified earlier.
The real benefits are indirect (from the viewpoint of the insurance people who unfortunately pay for it)- quality of life is much better if you catch it earlier, and the medical research benefits are huge.
Realistically, it's also not $250 even outside the US - not for the resolution needed to diagnose cancers. That's below the depreciation cost of a high end (say $1M) machine. 12 scans a day (it takes roughly an hour for an average scan, and 12 per day per machine is pretty average[1]), 7 days a week, for 10 years is 43,800 scans. So ignoring interest, labor, and absolutely everything else that's $228 per scan.
A full body MRI takes an hour only for small patients. More realistically 1.5-2 hours.
I know, but my point is that the price doesn’t scale with machine and helium costs, but with labor costs and the level of insurance racket a particular country has.
Obviously it’d be great if we didn’t require helium supply chains to make medical scans, but unfortunately we can’t fix everything with technology alone.
The cost of the machines will drop considerably, so it definitely will have an effect. That $250 is still a lot of money for many people if it's not covered by insurance, and in the developing world it is utterly unaffordable.
> Fires, particularly house fires would become less common as wires simply stop conducting when they are overloaded.
I don't know enough about how this material behaves, but a superconductor "quench"* can be pretty catastrophic. I could see a room temperature superconductor battery causing fires from a quench.
we could do annual scans with current tech. the fundamental limitation of MRI is proton relaxation time, which limits the sampling rate. the path to reducing scan time and thus cost is to use a more sophisticated reconstruction method to reduce the number of required samples. this is being worked on.
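As a toy illustration of why fewer k-space samples demands a smarter reconstruction (a 1D zero-filled-FFT sketch only -- real scanners and compressed-sensing solvers are far more involved):

    import numpy as np

    rng = np.random.default_rng(0)

    # A toy 1D "image": a few sharp features on a flat background.
    n = 256
    image = np.zeros(n)
    image[60:80] = 1.0
    image[150:155] = 2.0

    # MRI acquires samples in k-space (the Fourier domain), one point at a time,
    # so scan time is roughly proportional to the number of k-space samples kept.
    kspace = np.fft.fft(image)

    # Keep only ~30% of k-space samples, chosen at random; zero the rest.
    keep = rng.random(n) < 0.3
    undersampled = np.where(keep, kspace, 0)

    # Naive zero-filled reconstruction: just inverse-FFT the incomplete data.
    naive = np.fft.ifft(undersampled).real

    err = np.linalg.norm(naive - image) / np.linalg.norm(image)
    print(f"kept {keep.sum()}/{n} samples, naive reconstruction error: {err:.1%}")
    # A compressed-sensing style solver exploits sparsity to do far better than
    # this zero-filled estimate -- that's the "more sophisticated reconstruction"
    # being worked on.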
i don't have any data here, but I am dubious that a room temperature superconductor will bring down the price of MRI machines. a room temp superconductor only saves you a dewar, about $50k of liquid helium and a cryocooler. you still have to build the rest of the MRI, which is an _extraordinarily_ sensitive instrument
There is a lot of infrastructure around the MRI in order to support liquid helium storage, cycling, and inter-device pathways. It isn't just what you see in the room.
It might still be a large machine, but a bunch of bottlenecks disappear. With that, it is only a matter of time until a startup develops a much cheaper, smaller, and more efficient device.
Philips is a big player making the assumption that the next generation of MRI will be smaller, cheaper, and more widely available. But, I can confirm smaller players are operating on that assumption as well.
CMIIW but the main thing to make acquisition times more reasonable is higher magnetic field strength. Which, leaving all the technical questions of achieving it aside, comes with other fun constraints like requiring heavy shielding for the room and of course very careful control of what kinds of metallic objects can go near it...
If these materials are feasible and indeed not that expensive to produce, there's high potential for:
- higher efficiency turbines and solar panels - more clean energy for the same investment
- fusion?
- low-energy computing at higher performance; as we learned recently, LLMs so far can't take advantage of the hitherto zero marginal cost of software
- democratization of advanced quantum computing?
It's all very exciting and in a truly replicable and industrially-feasible scenario I'm starting to feel this could be another 1960s kind of rate of change. One can dream, no? Maybe we can finally get rid of all the doom & gloom stories we tell ourselves and actually do something with these unexpected presents of our times? Think smartness instead of ignorance, (old) Star Trek instead of the latest Fallout fantasy on the horizon? Why not?
These and many more consequential innovations might develop just in time, as climate change is coming at us much faster than we are willing to admit (don't look up).
That said, even with all of that (including fusion) we will still need to cut our co2 emissions; drastically change our lifestyles / minimize consumption and deal with already locked in impacts hitting us sooner than later.
Enthusiastic midnight edit:
Also what's up with graphene based ICs and optical computing advancements? Competition of new old ideas finally come to be realized? What's next? I want a new breed of superconductor enabled Lisp Machines by 2030! Why not home brew "3D print" the whole thing? That should be the ultimate target here! The handling of "open source" lead would probably suck though %D.
I guess Alan Kay wouldn't be enthused by such a Lisp Machine renaissance in principle yet still stand with his "the best way to predict the future is to invent it" credo.
Let's predict a future for a planet that shifts back into balance!
You know all those bird scooters people leave lying around cluttering up the sidewalk? With a room temperature superconductor they will become hover scooters cluttering up the sidewalk!
And all of the high voltage transmission lines we want to build but can’t because of permitting reasons would have zero energy loss if we actually built them, which we won’t.
Yes and no. There's no possible way to have a superconductor be an actual hoverboard out in the broader world - the magnetic field of the earth is just nowhere near strong enough. You can make specialized areas where it would work, though. It's even been done already with LN2 cooled superconductors - https://www.theverge.com/2015/8/4/9091951/lexus-hoverboard-v...
SC could help enable nearly lossless transmission of power in HVDC lines, but HVDC lines are already significantly more efficient than our regular ones and we don't build them for a variety of reasons, so it might not make much of an impact there for regulatory/NIMBY/etc. type reasons.
We would first need to build a superconducting track for the hover scooters (containing magnets) to ride on. Assuming we ever figure out how to mass produce the superconductor inexpensively.
Perhaps if the track were arranged in a grid-like pattern, the scooter could use superconducting electromagnets to accelerate and steer.
I would actually welcome scooters that only work on specialized tracks. Walking safely on the sidewalk will become as easy as staying away from those tracks.
I don't think anybody can really say for now, but if the price of it comes down enough, there are two rough categories of things that can happen:
* devices that currently use superconductors don't have to use cooling anymore, and so become much cheaper to build and operate (MRI machines, certain sensors, high-power magnets for things like fusion research, big generators, big motors). This is a pretty solid bet.
* devices where superconductors would be an improvement, but currently don't make economic or practical sense. These are almost certain to crop up, but which ones will pan out is IMHO very speculative.
In the latter category, things like computing chips, more sensors, certain art works (sculptures with permanently levitating parts, how awesome!), smaller motors and generators seem plausible.
But there are likely whole categories of things we haven't thought of that could benefit from either zero resistance or the rejection of magnetic fields.
It means we won't lose MRIs when the earth finally runs out of helium as they require liquid helium to run the superconductors that generate the magnetic field. We're running out of helium with no way to replenish it. Helium is the only element on the periodic table which is a non-renewable resource on Earth.
So MRIs will get much cheaper, and they could end up being as cheap as taking an x-ray today.
Based on our current data LK99 isn't a replacement for helium cooled superconductors for the same reason YBCO and similar aren't - critical current/critical magnetic field aren't in the right ranges for what we need out of MRI machines.
I do think it's too early to say one way or the other what all of this ends up looking like, so we might find that purer/larger samples have better properties than what was measured so far, or the discovery puts us on the trail of other RTAPS in the same class that might be better for these purposes.
I never claimed LK99 had the properties needed for MRIs, the question I was answering was "how will the world change with room temperature superconductors", and my answer is valid in the context of the question.
If the earth runs out of helium we will just use low field MRIs that don't require superconductors (see eg https://www.nature.com/articles/s41467-021-25441-6 ). Their resolution is lower than that of high field MRIs, but they still seem to be a useful diagnostic tool.
I suppose you could count elements like gold, too. Since we technically can produce extremely radioactive gold in small amounts but the cost is so prohibitive it would never make sense to.
In a thousand years people are gonna look back at us idiots filling balloons with helium and letting them disperse into the upper atmosphere and shake their heads at how stupid we were.
I think you mean lossless transmission? If we can actually replace transmission lines effectively with it, unclear whether it's practical for that yet.
I'm thinking coil guns instead. Room temperature superconductors don't solve the rail erosion issue with railguns, but I think should greatly increase the performance of coil guns.
My thinking is that zero resistance through the projectile itself and through the rails would help, but you still need to make an electrical connection between the projectile and the rails. Either this is done with a plasma arc or physical contact, but either of these causes erosion of the rails even if there is no electrical resistance through the rails or projectiles. Am I missing something?
Coil guns seem like they'd last longer, wonder what the performance figures for coil vs rail are.
For future rail guns, they'd just have replacement rails available as they do barrels for tanks/artillery guns (that wear out after about 1000 shots afaik).
If he means that, I do not see it happening so easily.
Energy storage in a superconductor takes the form of a magnetic field in a superconducting induction coil (SMES), whose energy density per kilogram is much lower than that of capacitors and supercapacitors (which store energy in electric fields), which in turn is much lower than that of chemical batteries.
Inductors can be charged and discharged faster than capacitors (and far faster than chemical batteries), the materials last longer, and the self-discharge rate of a superconducting inductor is slightly lower; nevertheless, the energy density per kilogram - and the difficulty of managing such a sudden energy release - greatly limits the applications.
Even if LK-99 turns out to be real and is improved a lot (the current in the paper is limited to the milliampere range), I don't see it happening in the medium term.
If superconducting nanowires are someday achieved at a lower weight than batteries, maybe, but in a car, for example, you would need the added weight of metallic "magnetic shields" for health and safety.
IMHO, I don't see it beyond stationary applications.
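To put rough numbers on the energy-density point above -- a minimal sketch, per unit volume rather than per kilogram, with a 10 T field and a ballpark Li-ion figure as assumptions:

    import math

    mu0 = 4 * math.pi * 1e-7          # vacuum permeability, H/m

    # Energy density of a magnetic field: u = B^2 / (2 * mu0)  [J/m^3]
    B = 10.0                          # tesla -- already a very strong SMES-class field
    u_magnetic = B**2 / (2 * mu0)     # ~4e7 J/m^3

    kwh_per_m3 = u_magnetic / 3.6e6
    print(f"{B} T field stores ~{kwh_per_m3:.0f} kWh per cubic metre")   # ~11 kWh/m^3

    # Ballpark lithium-ion volumetric density for comparison (~300-700 kWh/m^3).
    li_ion_kwh_per_m3 = 400
    print(f"ratio vs. Li-ion (assumed {li_ion_kwh_per_m3} kWh/m^3): "
          f"{li_ion_kwh_per_m3 / kwh_per_m3:.0f}x in favour of the battery")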
This is consistent with my understanding as well. I don’t see instant battery charge being something that this enables. Current battery charge rates are limited by battery structure degrading more rapidly at high charge/discharge rates, so this wouldn’t really have any direct effect.
One somewhat ironic fact of this story is that the rogue former employee perhaps deserves some credit for bringing this thing out into the world.
They had been working on this for quite a long time themselves, rightfully so. But now the whole world is working on it and exploring other methods and combinations of materials I would assume (to improve upon the original design and avoid any patents).
How long would they have kept this thing to themselves without the rogue employee bringing this thing to light?
Perhaps that was part of the rogue former employee's motivation in "going rogue": that this thing needs to see the light of day so it can start to benefit humanity.
I think there are multiple big reasons they didn't publish sooner. It is pretty clear that they were convinced, but had insufficient evidence to convince others. Additionally there had recently been a sensational fraud in their field so journals would have been extremely sceptical.
It seems to me they were just in the process of constructing a convincing paper, which included convincing tests and could have been accompanied by sending out samples to independent labs.
Then they were essentially forced to put out what they had, which made their claims even more unconvincing.
It's also worth mentioning that room temperature superconductor "discoveries" happen a lot, and have always turned out to be a disappointment.
It's been considered to sit in the same Bermuda triangle of scientific vaporware demarcated by quantum computers performing useful tasks, fusion power generation, that sort of thing. Always just around the corner, and "dude, trust me, it totally works in my lab". It's basically a meme in physics.
Room temperature superconductivity is not demonstrably impossible like a perpetual motion machine, but mainstream press picking up on yet another set of amazing claims in this field tends to (deservedly) lead to a lot of eye rolling.
The consequence is that if you've made actual progress in the field, you'd better be very sure you are right and able to back it up.
Talking about fusion, isn't the major part of the tokamak plant in Cadarache, France, dedicated to cooling the superconductors down to temperatures near absolute zero?
If we make a leap in room-temperature superconductors, do we also make a leap in fusion?
It would help with one (out of many) problems in that field, sure. Though it would likely depend quite a bit on the nature of the superconductors we're able to produce.
What’s the track record of open prediction markets (no barrier to join) vs super predictor markets (have some requirements - be it credentials, proven expertise, or track records)?
I keep seeing markets like this posted for LK99 and personally find them unconvincing -- little more than a sentiment analysis of the Twitter and news hype cycle.
We're not, and I don't think there's a physicist in the world who wouldn't advise tempering expectations at this point. Yes, it would be exciting if it's the real deal, but it's also far from the first candidate we've seen, and so far they've never been replicated.
An argument could probably be made for distinguishing normal run-of-the-mill improvements and confirmations, which move the scientific state of the art inch by inch, from groundbreaking discoveries that have the potential to help the world to such a big degree. They could very well have written their finding as: we claim we discovered that this works and these features work, but we suspect these others may also work, and XYZ needs to be tested further; that is reserved for future work which they themselves are doing or which can be sped up by others participating. They would still get the claim as the pioneers of the discovery (rightly so), and still accelerate the path to practicality.
The only problem would be if they worded it in a sensational way without evidence to back up their claims, like almost all battery tech seems to be these days: "we discovered a solid state battery that will change the world, make EVs, flying planes, ships, trucks and remote control toys orders of magnitude cheaper, faster, safer blah blah ** once we figure out how to get it working in real world conditions and test it for real. We were talking about possibilities extrapolated from our little theoretical progress."
I still believe the scientific community is smart and moral enough to accept statements held true by evidence. Scoffs are reserved for hyperbolic claims.
To make my argument, see the recent paper on achieving energy-positive nuclear fusion: the authors didn't wait until they could achieve a net energy-positive condition (where total energy into the system is lower than what was produced). They published when they achieved energy into the reaction < energy output from the reaction, which was a big deal in itself, even though the practical goal will only be achieved in the future by building upon this.
Evidently they did have sufficient evidence. With such a big potential payout of such a world changing material, others have invested a few days of time into verifying their rushed paper and indeed their weak evidence was enough.
Again, with a payout so big and the cost of replication fairly low, the evidence doesn't even have to be that convincing.
> I think there are multiple big reasons they didn't publish sooner. It is pretty clear that they were convinced, but had insufficient evidence to convince others. Additionally there had recently been a sensational fraud in their field so journals would have been extremely sceptical.
But hasn't it been 24 years since they discovered LK-99? Or am I missing something? If that's true wouldn't it be a tragic shame that it wasn't revealed earlier, so that progress and applications could occur?
It was originally discovered in 1999 but was shelved for over a decade due to insufficient funding. As I understand it, they only received funding in 2018 and became more confident in their discovery after that.
I agree, the world still doesn't really care about science. And yet everyone's living lives that are all the better because of science.
It's a human problem, we only care about what happens to us or our tribe and most of our focus is not very forward looking. We're better than other animals as evolution favoured us looking just a little more forward than other animals, but now we have to escape the timekeeping of our meaty flesh and think "if we dump money into science then this generation will get some cool stuff and the next will get a shit tonne of cool stuff".
Exactly, the 99 stands for 1999, so potentially this material has existed for over 20 years and we could have started testing and replication on a much wider scale a decade ago. What benefit have we gained by disseminating this research so slowly?
This reminds me of an Asimov novel (don't remember the name)
In this universe there are long-lived humans (single individuals, not a group of quasi-immortal humans) who don't really collaborate with other humans in the scientific field, so as to have all the credit: who cares if you take 200 years for a discovery if the glory is all yours?
Maybe The Naked Sun or one of the books in that series? It’s been years since I read it but I remember there being some motif about people not interacting physically and living a long time.
> What benefit have we gained by disseminating this research so slowly?
It's less visible, but at this scale there's a high cost in everyone pausing their research to replicate LK-99. If it turns out to be real, that's worth it, but if not, that effort could've gone to better use. Think of what we could've done with the time we spent reproducing cold fusion.
It's survivorship bias; yes, you would love to see the significant discoveries happen sooner, but if they were rushed you'd also see a lot of trash that currently never leaves the draft stage. Just look at how many people complained that the LK-99 papers were incomplete and unprofessional!
Also, LK-99 was discovered in 1999 but wasn't known as a room-temperature superconductor then. There are too many things to research and not enough researchers, so it was put on hold for two decades.
> It's less visible, but at this scale there's a high cost in everyone pausing their research to replicate LK-99. If it turns out to be real, that's worth it, but if not, that effort could've gone to better use. Think of what we could've done with the time we spent reproducing cold fusion.
This is a (I feel rudimentarily obvious) fallacy of finite time: it implies there are N researchers who must be working on something useful at any given time. The individuals reproducing cold fusion / LK-99 / whichever might not otherwise have been spending their energy / time / funding on anything more "guaranteed productive" (or even on anything at all).
Knowledge motivates research. That motivation is not a classically scarce resource; it's multiplicative.
Most PhD students are desperately looking for a useful thing to work on. Most professors desperately search for a useful topic that they can use to justify grant funding. Etc.
I'm sorry but this is BS (and I don't mean that as a personal attack, just a repulsion to the idea)
As if researchers must drop everything they're doing and attempt to reproduce every wild claim. It simply doesn't happen, because these people are able to work through hypotheticals and form value predictions... just like they do with any other research resource-allocation decision.
No, it's far worse to lock up knowledge behind academic institutional norms and hide and yell "rigor" when anyone is curious to see what's being worked on. People can decide for themselves what is worth pursuing and the OP is likely correct that we're doing damage to our own progress by not disseminating a hypothesis that's being worked on but could use more resources. Furthermore this fear of someone publishing before you in a race to claim the Nobel prize likely doesn't help.
The institutions should formalize and normalize a method for broad open access to research in progress and broader collaboration.
Seems extremely unlikely there's a "high cost", because it's already clear there are some strange properties involved in LK99. Cracking the explanation for this could potentially help advance material science in all sorts of ways.
In some ways this is pure science. Multiple groups working on the same thing increases the chance of searching the problem space thoroughly, and of not missing better local maxima.
Exactly, hindsight is 20/20. This reminded me of a friend of mine fixing a coding issue that was impacting performance. Fixing it saved a lot of money and it was the initiative of the engineer to bring it up. The response of the (at the time) CEO: well, why did you not fix it earlier?
> It's less visible, but at this scale there's a high cost in everyone pausing their research to replicate LK-99.
Less visible? More like invisible if you ask me. There's all sorts of seemingly frivolous research going on at any given moment. A red herring in this case would be no different.
The instant strange behavior becomes apparent it becomes worth researching. I don't see how keeping it silent is anything other than a tragic waste.
If it's bogus, we discover that sooner.
If it's legitimate, we discover that sooner, AND the benefits to humanity can begin 24 years earlier. Think of how much we've accomplished in the last 24 years. Now think of how many benefits could have been reaped during that time by building on this new knowledge (assuming it has practical applications).
If it's truly useful and they knew this 24 years ago: what a waste.
> but at this scale there's a high cost in everyone pausing their research to replicate LK-99
The people deciding to pause their research are experts with their own agency. If they are pausing their own research then it's because they've made an expert determination that it's worth doing so. So you don't need to worry yourself over it. Open the information up so that other experts can decide what to do with it.
I don't know if this is true or not, but I read something suggesting they produced it in 1999 as a side product of something else but didn't start to examine its properties properly until a handful of years ago.
It might be that the process to make LK99 produces a fraction of the Good Stuff, which went under the radar until recently. It would not have been the first time. If it is confirmed with the proper methodology ... right now it's too early to build castles on LK99.
With the anti-hype hype that surrounds this discovery this was to be expected. Why would I disclose this kind of research to the public so that the whole world can benefit from it when I'm called a fraudster or a liar? Why would I risk my reputation – no matter how I spin my research – in a sacrificial circle jerk when I can work on it stealthily and hope to make some bucks?
Neither of them were doing research for the bulk of the time. We can only guess what brought them back to the project, but it may have restarted sometime around 2016.
> The Croatian scientists say that current will flow effortlessly through their material, a mixture of lead carbonate and lead and silver oxides, at up to about 30 °C.
[...]
> Danijel Djurek, a physicist at A. Volta Applied Ceramics in Zagreb, Croatia, claims that he discovered his superconducting ceramic mixture in the late 1980s. But he was unable to pin down the structure and formula of the material, and his research was interrupted by years of war, following Croatia's split from Yugoslavia.
I'd like to add that if this claim of having replicated the material with the reportedly poor description of how to do so is true then it really isn't that hard to replicate. I am surprised they haven't published a paper by now.
And this suggests that they were conflicted for some reason about publishing, likely commercially so (as evidenced by the patent). Maybe they were trying to create a product with it that they could sell before others could replicate the material.
So the rogue employee does absolutely deserve credit for bringing this thing into the world and humanity should not stigmatize them for doing so; perhaps we should do exactly the opposite.
people will claim all kinds of crazy, implausible shit in patents. you would go insane trying to draw conclusions from the set of applications sent to patent offices as a whole, it would be impossible to differentiate what is legitimately worth paying attention to.
I hate that the patent system went from engineers patenting a working, proven idea to business a-holes patenting an "idea" then sitting on their butts waiting for an engineer somewhere to prove the idea so that they can raise their hand and say "I have the patent on that".
There are actually provisions that are supposed to prevent that. It's called the enablement requirement. You cannot claim an invention that you don't know how to make. But in practice, it turns out to be easy to argue around that; there are so many non-experts involved in the process. So here we are.
Just learned about that the hard way myself, haha.
Some guy got a little traction on Twitter briefly for his room temperature superconductor patent a couple days ago, but when you click his name he's just some crypto nutjob with no science background
They can still recover from this reputationally very easily assuming egos are not in the way.
Accept that it happened, make amends, make those things very public, and move forward to making an official announcement together with the so-called 'rogue' author (in my opinion he did the world a favour).
One, slightly more cynical, theory is that the rogue employee's motivation was to push the research out there in order to create an opportunity to engage in their own research, or to get funding for that research, by making public what they may otherwise not have been authorized to make public. It'll be interesting to see if there are any threats over NDA or trade-secret violations that come out of the original, premature publication.
Everyone's assuming it's about the Nobel, but it is much more likely it's about far more lucrative goals.
I think an interesting outlook on these situations is that if you believe someone has the right values (e.g. earning money and earning respect from peers), then greed towards those values is a noble pursuit.
If society can't make greed go away then maybe it's better to celebrate human nature and contextualize/guide it into being pro-social.
L&K are two PhDs taught by their now-deceased former professor. Their research into LK-99 was inherited from that professor, but they lacked funding, hence it was a hobby project for a long time, and they needed funding to work on it full time.
The Korean professor: A professor who started taking interest and sponsoring the project for the last few years. However, he likely didn't participate directly in the research much.
The American professor (Ethnically Korean): A professor who was brought in rather recently (Maybe last 1-2 years?), to help bring credibility to the team. He is probably the one with the expertise to perform measurement and ascertain that it is a superconductor.
The Korean professor got kicked out from the team for unknown reasons. This was evidenced in the April paper published in Korea, which lacked the Korean professor's name. Quantum center also removed him from the website.
4 months later, he was back with a vengeance, and published the paper, with only him, and L&K on it (ie, the 3 man paper). The Korean professor possesses no sample, and also lacks the original data for LK-99.
A day later, the American professor hastily rushed out another paper (the 6-man paper), with L&K, himself, and 3 irrelevant lab assistants.
The earlier Korean paper in April had gotten zero attention and was treated like a typical troll paper, so they probably assumed posting on arXiv was just good material for later legal battles, and that they could otherwise continue happily researching and patenting their material.
However, interest in LK-99 exploded like a nuke, because the first signal of credibility -- that two professors were in a bitter battle over authorship -- did not escape people's eyes.
2 days later, the Korean professor had a presentation at a Korean conference, but attendees were disappointed by his data and lack of samples (Prediction markets hit a low of 15% at this point).
However, the American professor took control of media communications, and quickly dissociated with the Korean professor, yet he completely stood by the claims of LK-99. Now the American professor has talked to numerous Korean media and directly sent an exclusive video to NYT.
It's interesting why L&K chose the American professor over the Korean one. Needless to say, if this works out, the interpersonal saga will end up in a Netflix documentary, and probably many court cases.
I should also note that the Nobel Prize is likely a secondary concern for the group. They aren't in a comfortable academic position where the money is only a bonus. They were struggling PhDs for a long time. Nobel Prize money is too little, and will come way too late, to make a difference in their lives.
L&K and the American professor are likely trying to maximize the value of their patents and know-how to sell to investors, rather than to make public verification happen ASAP, which would be optimizing for a Nobel Prize.
A hash like a trapdoor, impossible to get properties of the face back? Or a hash in the sense of not a picture but just some important parameters, but still reproducible?
Would work for many using whole genomes. CpG methylation patterns could also work.
"Despite the important role that monozygotic twins have played in genetics research, little is known about their genomic differences. Here we show that monozygotic twins differ on average by 5.2 early developmental mutations and that approximately 15% of monozygotic twins have a substantial number of these early developmental mutations specific to one of them."
This is already in the 6 man paper, that's how the internet deduced their story:
"ACKNOWLEDGEMENTS
We acknowledge late Prof. Chair Tong-seek for initiating research of a 1-dimensional superconductor of over room temperature at atmospheric pressure. In particular, his enthusiasm on superconductor study impressed many researchers."
If this works out, this will be by far the biggest invention from Korea ever. They will be remembered in every Korean textbook to the end of time.
The allegation is that the person in question worked with them on the research, and published it on arXiv without the permission of the others. Thus they can be both a co-discoverer and a rogue former employee.
Supposedly he worked there until four months ago. I am just repeating stuff I read on the Internet and I don't know that we know all of the facts just yet FYI.
We will never know, but maybe they received some governmental pressure to never publish and pretend the research led nowhere, so that only their country gets the benefit.
But the rogue scientist wanted to share it with the entire world.
In the early 90s, Prof. Choi theorized a new way of creating a room temperature superconductor but it wasn't really accepted by the mainstream. Supposedly, in 1999, two researchers in Choi's lab, Lee and Kim (hence LK), created the first sample of LK-99. They weren't able to reliably replicate nor were they able to fully explain the theory.
In 2017, as a dying wish, Choi asked Lee and Kim to continue on with the LK-99 research but asked to not publish it until they understand the theory behind it.
They founded a lab and collaborated with other scientists in the field and Kwon was one of them.
4 months ago, Kwon left the lab and two weeks ago Kwon leaked the paper. The paper Kwon leaked only listed Lee, Kim and Kwon as the authors and excluded other collaborators. Some suspect Kwon excluded the other collaborators because Nobel prize can only be awarded to three people at most. A few days after the leak, Lee & Kim hastily published a paper with four more authors but not Kwon.
A former employee was the first to publish a paper with only three authors. I am surprised you haven't heard about this.
Then a few hours later a paper was published with six authors (without one of the original three authors), ostensibly by the core team that has been working on this.
There should be much more detail on the Internet if you search for it.
Notably, the missing author from the second paper is the rogue employee who published the first paper. The order of the first two authors is the same for both papers.
earlier in the week, someone posted that the person who posted the paper early is someone who wanted to fund the project, but the researchers didn't want to include him. So he took as much data as he could collect and posted it publicly with the addition of his name. He is not a researcher or scientist for the team.
It's one of the reasons people were thinking this might not be a fraud.
Nobel prizes are capped to 3 people, and the speculation is that the rogue person published early to try and ensure they were one of the people that received it. No reason for infighting like that if you know it's a hoax.
No, they are not. The only consequence of that action was a worldwide mess of people trying to reproduce the experiment or disavow it. The original authors needed a bit more time to polish their discovery, write a solid paper with undeniable proof, and maybe even prepare handout samples.
Young-Wan Kwon's legacy will undoubtedly be that of the hero of LK99. There is a lingering thought that had this discovery been made in the US, it might have been hidden indefinitely by the military-industrial complex, never to see the light of day.
Young researchers in China often face intense competition and pressure. While they are generally well-funded in the short-term, even more so than their counterparts in the US or Europe, the lack of long-term career security can be challenging. They must continuously chase after every potential scientific breakthrough, like LK-99, not just out of passion or curiosity, but as a necessary step for survival in their career.
Furthermore, the system in China offers many awards, grants, and titles that are tied to age. These are not just for prestige but are critical for progressing in their career. This situation adds another layer of urgency and competition among young researchers.
> Why are't more labs outside China making LK-99 and publish videos?
some possibilities:
- they have not been able to conclusively replicate anything and don't want to publish a negative result for fear of someone else publishing a conclusive positive result later.
- they are more careful to publish something that they are not (yet?) 100% sure about
- they don't care so much about the whole 'science in the spotlight' thing and prefer to go the traditional route of publishing after peer review of one or more papers rather than to make YT videos and having to fend off a barrage of interaction
Yes, good points though the money angle is debatable.
The USA and Europe are 'on average' also more pro-science than the rest of the world, but I think the East has the edge in education and comes across as more focused on progress. Probably this is underpinned in part because they have a ton of very hard problems that need solving and in the 'rich West' people are much less driven because their lives and the lives of their families are on average already quite plushy.
It makes you wonder what could happen in Africa and Latin America once they embrace education and science.
I didn't want to get into it too much, but I wrote that Chinese people are, on average, more pro-science because they're less religious, or because their religion does not strongly contradict science.
I’m guessing your parents’ HK church friends are Christians?
Yeah I'm not disagreeing with you at all, it's the Christianity and Hong Kong's post-colonial legacy.
The mainland Chinese social order is secular. The technocratic state, the CCP, had a lot to do with that. I would actually argue that technocracy is a superficial form of scientific culture.
Because all of these replications aren't contributing much; they're bare bones efforts with little to no scientific insight. Not one of these papers is conclusive in terms of showing evidence for it being a superconductor. None of them even contribute meaningfully.
This happens in machine learning all the time. Low quality papers rush in after every major release and announcement in order to be first. But in the long term they're meaningless because it takes time to do a good job.
Good labs don't want to announce half done maybe results. They want to announce conclusive comprehensive high quality results they can stand behind. That's what moves science forward.
Plenty of labs are working on lk-99, but they won't publish this sort of half assed analysis.
These studies help enthuse people though. I welcome them. Not for the science but for the potential. It tells me that something is there; the rest is on the experts, as you said.
A lot of commenters here are from the West, and the atmosphere between the PRC and the West (particularly the US) at the moment is quite a bit more acrimonious and competitive than it has been historically. It makes sense that people would leap first to essentially "They aren't as smart as us"/"They're less honest than us"/"They're less trustworthy than us" etc. The simplest and most likely correct explanation is that the PRC has a lot more STEM students in absolute terms and even proportionately, more and very well funded labs with very ambitious leadership and "national spirit"/"something to prove" style thinking, and that particularly in chemistry/materials science the PRC is dominant, with a huge proportion of all the highest quality chemistry focused universities, academies etc.
I, too, found it fascinating how every initial reply dodged giving the Chinese credit. Instead, all of them tried every way to paint their publications in a negative light.
Surely, most tech workers have encountered working with highly competent technical coworkers from China? Or that Chinese students in America tend to perform well above average academically?
Why should anyone be surprised that China performs exceedingly well in sciences?
But they have also encountered IP theft and Chinese nationals assaulting local and foreign nationals in countries they are lucky and fortunate to have the right to study in, over protests: https://www.scmp.com/news/asia/australasia/article/3019888/h...
Now I hate that I have to say this (because it should be a given), but _obviously_ this does not apply across the board, but when people experience shocking behaviour like this you can see why they might hold grudges or biases, even when that's wrong to do.
That aside, China & the US (and there are a lot of Americans on this site) seem to have held a grudge for quite a long time now, on both sides. Which is a shame, because we're all human at the end of the day, and science especially should recognise that stupid tribal human concepts like nationalities and borders are meaningless to the big picture.
> Why are't more labs outside China making LK-99 and publish videos?
Good rigorous science takes time to produce. It can take anywhere between several months to a year or more, and the career implications for rushing something out that is later found lacking is not great.
> the career implications for rushing something out that is later found lacking is not great.
On a tangent, this idea of reputation keeps coming up in this whole discussion and I am burdened by it in a way I don't fully understand. The way people have talked, if this LK-99 doesn't work out, then it is almost as if those who published this did something _morally_ wrong. Well, morally wrong is not quite true, but the way people talk about it tanking their reputation feels like such a strong statement. Is there some way we can focus on the science and not get bogged down in the very human reputational part of this whole thing? It's almost as if a good chunk of the scientific community care less about the benefits the science brings than about the reputational benefits.
A scientist's career depends on their reputation. For them to have the best opportunities, they need everyone to have the highest respect for the quality of their work. If they damage their reputation, it could ruin everything they've spent decades working toward.
Of course they care about science itself, but there's a limit to what risks they'll be willing to take when it affects them personally.
For people with a relatively low reputation (or no reputation, i.e. unknown), taking a risk is not a bad move. They have less opportunity, and there's a chance the risk might pay off and boost their reputation.
For people whose reputation is already good, the risk is less worth it. They don't stand to gain as much, and they could lose a lot. So they're less likely to do it.
It's a direct side effect of reputation and funding being closely correlated: if your reputation is that you put out stuff that doesn't work, you won't get funded. This is dumb, but that's how the world works. That's why you almost always see the 'more research is needed' line in various papers; when seeking funding it is most helpful that one paper will lead to another. But (unfortunately) negative results aren't published nearly as often, because they will not get cited as much in follow-on papers. It's all the result of metrics-based meta-analysis of papers, aka the 'impact factor' (which, no kidding, is a copyrighted term); once that got established, it became the thing that science partially optimized for.
During the 'golden age' of science, the time of the Royal Society the fields weren't specialized at all and the publication mechanism was scientists sending each other interesting stuff by post. At that time there was no meta analysis at all and there was so much low hanging fruit that the 'gentleman scientist' could make big breakthroughs in their home laboratories. But as that low hanging fruit decreased the educational paths required before being able to do meaningful science became longer and longer, then specialization set in and the costs of doing science went up. That's how we arrived at grants used to fund science.
> That's how we arrived at grants used to fund science.
A lot of these gentleman scientists were independently wealthy aristocrats that didn't need hand-outs. The fact that we don't to a meaningful extent have that sort of leisure class anymore is arguably a much bigger reason we need grant funded science these days.
It could be argued there's a bit of a replication of the pattern in the space race between Musk and Bezos, but they're missing the sort of well-rounded education the aristocrats of yore would have had[1]. They employ a lot of people to do the actual dirty work, but that's not really a big difference from back then either.
> It's a direct side effect of reputation and funding being closely correlated
This sheds some light on it for me. I guess what partly surprises me is that people seem to care about reputation as more than just a means of improving the signal-to-noise ratio in papers, or an estimate of what will give you the biggest bang for your buck.
The other issue I see come up is the idea that if there is no signal to noise filter, then a scientist might "waste their time," either reading the paper or trying to replicate. But to me, it sounds a little bit like trying to avoid actually doing science. And peer reviewed papers don't imply excellent quality either. You should evaluate papers on their merits. It is your job, as a scientist, to evaluate the most productive approaches based on the merits of the science being done, not based on reputation.
Working in science is different from working in other fields in that you work with things that are not well known, where a lot is unclear and your job is to move information out of this murky regime out into the light.
This means it's really easy to just claim something, that will be really hard for others to verify.
And wrong claims are incredibly common. It's easy to delude yourself through all sorts of biases or good old sloppy work.
That's why, when scientists talk to each other, they need to know that the other person is a serious scientist and won't pollute their mind with nonsense.
If you develop a reputation for making baseless claims, people will stop including your claims in their own thoughts.
This is a good point. I think there is a difficult balance to strike between open communication and adding confusing noise to the scientific literature. Partly for historical reasons, there is an expectation that published science is correct to the best knowledge of the authors. Since writing, publishing, and reading papers takes a lot of time and effort, there are advantages to this precedent. I work in physics, and I can tell you that if we published all of our half-baked and often wrong ideas, we would waste a lot of people's time, at worst sending people down blind alleys that we would soon rule out ourselves. I suppose tying reputation damage to publishing incorrect or misleading results is then part of the incentive structure that keeps publication quality high. At the extreme, there was one recent LK-99 paper that had an obvious glitch in their data, and instead of taking a bit more time to debug it, they just posted the paper and speculated about what was going on. If that's how much you're rushing, how do I know I can trust your data?
But there are costs to this. There are big gaps between what people discuss with colleagues and what gets published, and there is no forum to publish partial or negative results, except maybe conferences. Ideally published papers stay at a very high bar, but there are other forums to publicly share work in progress. In a way Twitter is becoming this.
>Good rigorous science takes time to produce. It can take anywhere between several months to a year or more, and the career implications for rushing something out that is later found lacking is not great.
By my count, 18 of the top 20 universities for chemistry research are in China. The first US university in chemistry is MIT at 23.
One of the attempts is by USTC, the second best university in the world for chemistry research according to the Nature link.
China's lead in chemistry research is also translating directly to real world applications. For example, CATL and BYD combined own more than 50% of the car battery market. Six of the top 10 car battery makers are Chinese companies. [0]
It's not surprising that most of the first replication attempts are from China.
I think for a lot of people this whole saga is probably the first time that they realize that a ton of original work is done in Asia, rather than that it is just our manufacturing hub. They have to adjust their mental model to account for a view of Asia that is in important ways outstripping the West in terms of resources and combined brain power. The amount of scientific output in Asia is astounding, and the number of active scientists dwarfs the numbers in the West. If they would switch to stop publishing in English it would be quite amusing.
Lmao, all that implied was that rushed papers don't have time for in depth analysis, only reproducing the material without further insight. Which is absolutely true; we're not super-human, it takes time to replicate the material and then time to disseminate why/how it works.
You inferred logic that was never present in my reply. It's a correlation/causation error: it is fully possible for things to happen in China that are not caused by the specific character and nature of Chinese people; pointing out that something has happened in China is not an implication that it's caused by some special nature of Chinese people or Chinese society.
If you go out of your way to look for uncharitable ways of interpreting what others are saying, you will find them.
Instead of going on the offensive and taking such an uncharitable interpretation as a given, if you truly can find no charitable interpretation[1], maybe ask for a clarification rather than jumping to conclusions about unspoken implications.
While it's true that good science often takes time, I believe it's not necessarily the whole picture. In fast-paced and rapidly evolving fields like this one, swift and open sharing of progress can be incredibly valuable. This is evident from the recent developments in AI and large language models (LLMs), where real-time collaboration and data sharing have led to exponential advancements.
I guess it's a mix of
- It's a good opportunity to gain visibility by working on something grabbing the news. 1st room temperature superconductor, that's big
- (Under)graduate lab interns can be thrown onto any crash projects at will without much repercussion or resistance, boss is boss and should not be subject to questioning
I did a post-doc in China, so that's my sample size N=1 piece of cheap opinion
I imagine over the next few weeks there'll be an explosion of efforts to replicate if it's truly that straightforward to produce for reasonably-equipped labs.
> Why are't more labs outside China making LK-99 and publish videos?
Red phosphorus, one of the ingredients in the synthesis, is a controlled substance in the US. Might be delaying everyone while they fill out the paperwork with whoever their supplier is.
Imagine a game of rolling a collection of n dice (normal 6-sided), where the player wins if all n dice are 4 or higher (probability 1/2 for a single die).
Then the probability of a lucky roll is P = (½)ⁿ
So the smaller the collection of dice the more likely a lucky roll becomes.
Consider a hypothetical continuous production method of LK-99, where the fraction of wire in superconducting arrangement is a function of its thickness, more likely if thinner.
Could one simply re-anneal (and possibly re-quench) a short non-superconducting section until we get lucky, then proceed to the next non-superconducting section?
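A small sketch of that arithmetic, treating each re-anneal of a section as an independent trial with success probability p (the 1/2-per-die toy value above; for a real process p would be whatever fraction of anneals comes out superconducting):

    # Probability that a single roll of n dice is "all lucky", and the expected
    # number of re-anneal attempts per short section if each attempt succeeds
    # independently with probability p (geometric distribution: mean 1/p).

    def p_all_lucky(n: int, p: float = 0.5) -> float:
        return p ** n

    for n in (1, 2, 5, 10):
        print(f"n={n:2d}: P(all lucky) = {p_all_lucky(n):.4f}")

    # Working section-by-section: expected attempts per section is 1/p,
    # so total expected attempts for N sections is N/p -- linear in N,
    # instead of the exponentially unlikely p**N chance of getting the
    # whole wire right in one shot.
    p = 0.5
    sections = 100
    print(f"one-shot success probability for {sections} sections: {p**sections:.3g}")
    print(f"expected attempts doing them one at a time: {sections / p:.0f}")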
This is a very entertaining read, and highly recommended.
He produced extremely fine threads of glass, quartz, etc. finer than the visible light diffraction limit. (Which he would use to construct sensitive torsion balances)
He was inspired by Pele's hair. (Volcanoes grow hair too...). He first describes electro-spinning and its limitations (uncontrolled growth of hair which mats together).
How did he do it? He used a miniature crossbow, modified so he could trigger it by foot pedal, leaving his hands free to work.
He would first produce a small thin section by more conventional means, glue one end to a fine dart or arrow (a piece of straw really).
In the next sentence "blowpipe" is not a launching device, but a device to heat a small sample to a high temperature.
Then he would use a blowpipe and melt the piece of say quartz until a bead forms, at which point he would trigger the cross bow.
As the arrow of straw shot away it draws the bead of melt to a long fine thread.
Perhaps this can be modernized to vacuum or inert atmosphere, melt the quasi LK-99 sample until it beads, then shoot away. Rewind the resulting thread and re-anneal any non-superconductive segments.
>we successfully verify and synthesize the LK-99 crystals which can be magnetically levitated with larger levitated angle than Sukbae Lee's sample at room temperature. It is expected to realize the true potential of room temperature, non-contact superconducting magnetic levitation in near future.
Aaaand we’re back!
I’m really trying to remain (reasonably, not ideologically) skeptical but if this is legit this is a huge step towards confirmation.
At this point, even if this doesn’t turn out to be the holy grail this seems like it would still be a very big step/promising avenue for research on the path right?
IANA physicist, so grain of salt etc. but yes, I agree with your assessment - even if not as impressive as we're all hoping, at the least we have a very promising new vein of superconductor research to mine (as it were).
In other words: even the downside here is great, and the upside is...
It still hasn't been a week. Giving a bunch of these teams, and ones with no announcements yet, an additional year will hopefully lead to some very exciting findings.
1D just means it conducts in one direction. Like it's got tiny wires in it.
It's not particularly unusual; anisotropic conductive adhesive is used in your cellphone to glue the screen on. They tape over the electrical pads, then line up the corresponding LCD pads. The tape allows current to flow vertically between the pads, but for adjacent pads it's insulating.
Neodymium magnets are another example. They're made up of a little honeycomb, and inside the cells of the comb are very long, needles of neodymium a single atom thick. They create a magnetic field in a single direction.
2D in materials science means 1 dimension is nano-sized, typically referring to sheets of material that are 1-2 atoms thick. 1D means 2 dimensions are nano-sized.
I'm not hip on superconductor science and haven't heard of 1D/2D being used to describe conducting in 1 or 2 directions, do you have any further reading on that?
Assuming this is indeed the holy grail of superconductivity... can someone with knowledge on manufacturability of these materials, suggest a prediction of when the specific LK-99 material would be ready for mass production? Is it ~3 years from now or more like ~15 years from now?
I'm not the expert you're asking for, but almost certainly closer to 3 than 15. If it's the real deal, I imagine we'll have the industrial capacity to make a superconducting material within 5 years. The required supplies are readily available, and while production of this particular material seems difficult, there's like a whole new field of materials to find, with very strong incentives. China would love to lead the world in a new branch of material science.
All of the "it floats" videos are unattributed leaks or come from less-than-reputable sources.
The reputable sources only ever show videos of the sample touching the magnet.
From what I'm reading, several different types of materials can angle themselves like this from a magnet, but only Type II superconductors will float above a single monolithic magnet.
Until we see a confirmed video from a reputable source of a visible gap between the sample and the magnet, it's not confirmed that LK-99 is superconducting.
The problem is that no one has a pure sample of LK-99, only chunks of rock containing small areas of LK-99. So even if LK-99 is a true RT superconductor, we're only going to see levitation on the side of the chunk that has a high enough concentration of it. It's like trying to evaluate a diamond by poking at a rock that might or might not have any diamonds buried inside. It must be really frustrating.
So yeah, I hope there's a better way to evaluate this substance than "it floats!"
Conventional conductors have resistance, so when you run electrical current through them they heat up; this is why laptops and desktops have fans to cool the chips, why phones get hot and so on. Superconductors transmit energy without resistance, so the heating problem goes away and the energy costs of running the device go way, way down. Basically you can pump more through smaller wires without worrying about them melting. This means that just about any electronic/electrical device could be run more cheaply, safely, and at a smaller scale.
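To put a rough number on the heating (back-of-the-envelope with illustrative figures of my own, not anything from the papers): resistive dissipation goes as P = I²R, so it vanishes when R does.

    # Resistive heating P = I^2 * R for 1 m of ordinary copper wire vs. an ideal superconductor.
    rho_copper = 1.7e-8        # resistivity of copper at room temperature, ohm*m
    length = 1.0               # metres
    area = 1.0e-6              # 1 mm^2 cross-section, in m^2
    current = 10.0             # amps

    r_copper = rho_copper * length / area        # ~0.017 ohm
    print(f"copper:         {current**2 * r_copper:.2f} W dissipated per metre")
    print(f"superconductor: {current**2 * 0.0:.2f} W (zero DC resistance, zero loss)")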
Advanced tech like MRI machines, maglev trains, and quantum computers all use superconductivity now, but are enormously bulky and expensive because they require extreme cooling using liquid helium (which is in short supply). Room temperature super conductors can dispense with all that, so instead of a quantum computer being the size and power draw of a refrigerator (because it is in fact mostly refrigerator) it could go in your wristwatch.
Superconductors also expel magnetic fields, which in practical terms means they repel magnets. And they only repel, without being attracted to magnets at all, like iron or the poles of other magnets. So you can use them for levitation. And because superconductors have zero resistance, if you put energy into a superconducting coil it stays there forever, just circling round and round the coil.
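And a quick sketch of the "energy stays there forever" point (illustrative numbers of my own): in a normal coil the current decays exponentially with time constant L/R; with R = 0 it never does, and the stored energy ½LI² just keeps circulating.

    import math

    # Current in an inductive loop decays as I(t) = I0 * exp(-R*t/L); with R = 0 it never decays.
    L_coil = 0.1       # inductance in henries (illustrative)
    I0 = 100.0         # initial current in amps
    R_normal = 0.01    # ohms, a very good ordinary coil

    for t in (0, 10, 100):  # seconds
        i_normal = I0 * math.exp(-R_normal * t / L_coil)
        print(f"t={t:>3} s  normal coil: {i_normal:7.2f} A   superconducting coil: {I0:7.2f} A")

    print(f"stored energy E = 0.5*L*I^2 = {0.5 * L_coil * I0**2:.0f} J, held indefinitely when R = 0")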
This LK-99 material people are talking about is an alloy of lead and copper, and it's not that difficult or expensive to make. The raw materials are fairly cheap, and the production involves heating it to hundreds of degrees centigrade for 24-48 hours, which is very easy to do in a lab and probably easy to do at industrial scale. Scientists don't understand the material very well yet, but if these discoveries are validated (as appears to be happening right now), then refining the manufacturing process is going to happen quite quickly because the payoffs and economic demand will be enormous.
People are comparing this to the invention of the transistor; I think a better comparison is the electric lightbulb. It's going to change things massively, because any country will be able to manufacture this. You could manufacture this stuff at home; the equipment you need fits on a desk and costs only a few thousand dollars.
There's no general analytical method to solve for a material with specific properties, including superconductivity. The properties of materials are governed by the quantum mechanical behavior of electrons and their interactions with the crystal lattice. These interactions are typically described by the laws of solid-state physics, which involve solving complex quantum mechanical equations for the electrons in a crystalline structure.
For simple systems, such as some low-temperature superconductors and idealized models, researchers can sometimes make analytical approximations or derive simplified equations that describe the behavior of the material. But for most materials, the equations are too complex to solve analytically, and researchers rely on numerical methods, computational simulations, and empirical data to understand and predict material properties.
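For the curious, the "complex quantum mechanical equations" boil down to the many-electron Schrödinger equation for the crystal; even the simplified fixed-ion (Born-Oppenheimer) form below has no general closed-form solution, which is why people reach for density functional theory and other numerical machinery:

    \hat{H}\,\Psi = E\,\Psi, \qquad
    \hat{H} = -\sum_i \frac{\hbar^2}{2m}\nabla_i^2
              + \sum_{i<j} \frac{e^2}{4\pi\varepsilon_0\,|\mathbf{r}_i - \mathbf{r}_j|}
              - \sum_{i,I} \frac{Z_I e^2}{4\pi\varepsilon_0\,|\mathbf{r}_i - \mathbf{R}_I|}

Kinetic energy of the electrons, electron-electron repulsion, and attraction to the ions sitting at the lattice positions R_I; the middle term is what makes it intractable in general.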
It's similar to the reason we can't find easy analytical solutions to other complex systems (like models trained via Machine Learning), there are just too many complex factors and interactions to take into account.
Same reason a photograph of a delicious meal (the theorized characteristics of the material you want) doesn't give much idea of how to make it, even if you have a stove and live next to a supermarket with all the ingredients.
The cost of the cooling subsystem and whatever workarounds you need to do to deal with heat buildup, plus about 10-20% in resistivity iirc. Not a pro engineer, but have some experience with building electronics and embedded systems.
Everything that involves powerful magnets and benefits from making things float - like bearings.
Robots and exoskeletons come to mind.
One of the more bonkers applications would be to wrap Mars around the equator with it, creating an artificial magnetosphere.
Temperatures on Mars are pretty low, but during the Martian summer they get to a nice 20°C there, so currently available superconductors are not up to the task.
MRI machines will get much better/cheaper. They use superconductors already, but are very hard and expensive to keep at their super-cold operating temperature.
It’s a breath of fresh air reading about actual revolutionary science happening.
The deluge of news about non-replicable results, fabricated data, and overhyped press releases from both academia and industry has become really depressing. For once, after a long time, it's the real deal.
Even if this is not a RT superconductor, it’s now evident that the original authors didn’t cheat and are not crackpots as initially suspected by most.
Agreed. It's felt like so many of the technological marvels have either happened quietly (e.g. the reductions in chip size) or in the past (e.g. all the physics breakthroughs of the 1900s). Like ChatGPT, this seems to have the characteristics of being both accessible and completely novel.
What is interesting is that assuming it all pans out these are all in entirely unrelated fields. That sets the stage for a whole raft of follow on inventions. Similar to how the telephone + basic electromagnetism led to radio, tv, radar, transistors and ultimately to computers small enough to be practical for businesses, the internet, cell phones and so on.
My grandmother was born in a house without running water, no sanitation and no phone. That's just a bit over a century ago. The rate of change on an annual basis isn't all that large, a decade and you'll see big changes, our world can't be compared at all to 120 years ago in terms of luxury, communications, personal energy budget, food, travel options etc. Still, there are large areas of the planet where the last 120 didn't bring any progress and there are those where they actually went backwards, not rarely to our (the western world for me) benefit.
The last 20 are already mindblowing to me, to the point that I am a bit overwhelmed. Just reading to keep up with all of the tech developments is basically impossible today.
Yeah, I recently remembered that my computer from 1995 had about a 1 GB HDD. Now I can much more cheaply buy an SSD which will transfer 7 GB/s. Mind boggling.
I paid $2,500 for a 500 MB hard drive and $750 for 64 KB of RAM at some point. To me the world we live in is utter science fiction, and yet every morning I seem to wake up and it is all still there.
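Rough arithmetic on that (today's SSD price is my own ballpark, not a quoted figure):

    # Rough $/GB then vs. now (assuming a modern SSD at ~$50 per TB).
    old_per_gb = 2500 / 0.5            # $2,500 for 500 MB  -> $5,000 per GB
    new_per_gb = 50 / 1000             # ~$0.05 per GB
    print(f"then: ${old_per_gb:,.0f}/GB  now: ~${new_per_gb:.2f}/GB  "
          f"-> roughly {old_per_gb / new_per_gb:,.0f}x cheaper, ignoring inflation")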
In terms of basic tech, other than CRISPR what are you thinking of? Smartphones + internet are important, but are more combinatorial. Early quantum computers are cool, but niche (and even if perfected, seem to have application for breaking PKI). Reusable rockets are great, but not fusion. The standard model is locked in, at least up to LHC energies modulo sensationalist PR.
GPUs have been huge, unlocking so much more compute than was possible with CPUs. Most of the burgeoning AI industry is thanks to that cost decline, which is only getting started.
Every field of science has had breakthroughs in the last decade or two. Even superconductors (check out 'H2S', though it isn't without problems), medicine, genetics, quantum computers I'm not yet all that impressed with but they're too early stage to be really judged, the energy transition is really happening (sub-subject: the price of solar and wind power), semiconductors (every time we think it's over...), GPS (check out how it works under the hood), so much in materials science that I can't begin to keep up, phased arrays, lidar, the insane increase in computational power that you can stick under your desk, battery tech (one more breakthrough there and we will see all kinds of effects on other fields as well), solid state lasers the size of grains of sand, fiber optic transmission rates, the James Webb (ok, 'old tech' by now, but that's one impressive thing they pulled off there, especially the delta between the hot and the cold side of it) and on and on.
As for the future:
Fusion would be huge, but I'm not all that hopeful for cost efficiency there once you get to net power out but I'm not going to talk down the people that are doing the work and the research. And yes, the basic physics seems to be pretty stagnant, we're really waiting for a unification of the two major fields there but even if we do get that unification it may not lead to new practical tech, it could simply nail things down once and for all without moving the needle in terms of costs, speed or new materials science. It may have some implications for various computer models used in those fields and it probably would have impact on astronomy.
I'm posting this reply from my mainstream smartphone: a pocket supercomputer with a 4K 60Hz HDR video camera. I'm chock full of designer vaccines and can watch a catalog of almost all the movies ever made on demand tonight. I listened to a podcast in my EV on the way to work. And SpaceX probably launched 60 satellites this week on the 14th flight of a rocket that flew home afterwards.
Things have been moving along.
edit: comments are quibbling about the fraction of all movies ever made that I can see on a paid streaming service. This is like complaining that the meals and elbow room are not wonderful on a $400 flight from LA to London. It's a goddam miracle and you're still grumbling. Please tell me now how your $12/mo Spotify account doesn't have the June 1972 Grateful Dead New Jersey show your mom was at, and is thus near worthless. I spent $12 in 1980s money on a single album in my youth.
Let me amend my grossly hyperbolic statement and say I could stream on demand more movies than I could ever watch even if I did nothing else the rest of my life, including many but not all of the good ones. Now the statement is strictly true, but did this make my contribution better?
Important correction: the catalog of movies is likely significantly smaller than all the movies ever made. The ongoing removal of new movies for business reasons, and market fragmentation, mean that reduced profits are leading to corner cutting on the size of streaming catalogs; licenses cost money. Pirating via torrents, while a larger pool, also has diminishing returns as you step outside mainstream movies, unless you're lucky and someone diligently sharing bandwidth to keep content online happens to share an interest in that niche. We're rapidly losing access to an entire generation of analog-only media as people throw source material in the trash and corporate stewardship of original recordings slowly fails, one papercut at a time, for the decades it will take for the work to enter the public domain and be properly archived by the people trying to hold onto all of this.
We could have maglev trains and superconducting supercomputers and copyright will still be deleting our culture to “save money” for corporate copyright owners to increase their profit margins.
I pay a commercial service for pirated content and have access to over 50,000 movies alone. Then there are tens of thousands of TV shows. I can request shows. It gives me access to all the major streaming services' content. Anything I have ever searched for is there. Shows from around the world; all around it feels like unlimited content. Cheaper than Netflix, and I definitely feel I am getting great returns for my money. One day we will look up to pirates for saving our culture.
It took a few minutes of sleuthing to validate that my gut reaction of "but there's way more than 50,000 movies" was right... but thanks to UNESCO statistics (http://data.uis.unesco.org/) I've got the sad truth.
Your 50,000 movies is a lot, but it's not even half of the movies made since 1995 that UNESCO was able to reliably cite data for. I can't deep link to the exact spreadsheet, but it's under the culture data section, in the feature film statistics. The total number of feature films (which will exclude some things like short films and other stuff, based on various data processing considerations) produced around the world between 1995 and 2017 is at least (because there are likely more movies shot than produced) 107,432.
Assuming your "over 50,000" is the usual marketing line, and being generous and assuming it's somewhere between 50,000 and 55,000 movies, then if true that's approximately half the "feature films" produced between 1995 and 2017...
A sensible lower-bound extrapolation based on the data is in the range of half a million feature films (again recognising this data likely excludes things we would, on average, collectively call a movie), making your service closer to 10%... or possibly even lower...
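For what it's worth, here's the back-of-the-envelope version of that comparison (the UNESCO count is the one cited above; everything past that is my own rough extrapolation):

    # Back-of-the-envelope, using the UNESCO 1995-2017 figure cited above.
    films_1995_2017 = 107_432          # feature films, 1995-2017 inclusive (23 years)
    catalog = 50_000                   # the "over 50,000 movies" claim

    share_recent = catalog / films_1995_2017
    per_year = films_1995_2017 / 23
    print(f"catalog covers ~{share_recent:.0%} of 1995-2017 features alone")
    print(f"~{per_year:,.0f} features/year recently; over a century-plus of cinema "
          f"the all-time total plausibly runs into the hundreds of thousands")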
Over 50,000 movies feels like a lot until you dig into how much media we make as a species. That's just feature films, likely only ones with a theatrical release (I don't think I can reliably translate a lot of the source documents even if I wanted to validate the source data criteria myself, hence the weasel word "likely"), meaning it's excluding a LOT of films/movies, and it's only for 1995 to 2017, covering an era where the "amateur film" scene was rapidly exploding due to falling costs of the technology behind recording, editing, and distributing a "movie"...
None of this is to tear down the effort of hard working preservationists... both legal and illegal. I agree with the archivists I've had conversations with, whose collective opinion can be summed up quite simply. "Any copy is better than no copy."
It is a Plex share. I don't really want to draw attention to the company sharing it, so I won't share the site directly, but just go to Reddit, search for "plex share", and start looking. There are lots of different ones, some paid, some free; some tailored to anime, some to other genres.
Over the last 10 (20?) years I've noticed a notable shift in public consciousness about scientific progress and the idea that we as a species are moving on an upward trajectory towards a brighter future.
One of my favorite movies of all time is “Contact” mainly because of how damn _hopeful_ it portrays humanity. I want to live in that world, not the gloom and doom “we’re ruining everything around us” that is so often shown in recent media.
And the whole thing is perplexing to me because, as you've put it, we _are_ living in the future! There are so many great things happening around us, but people seem not to notice - not just the technological things you've mentioned, but societal ones too.
Like, because of recent developments in economic theory, the world-altering global shutdown due to corona did not end with a Great Depression-like event for the whole world, and that alone to me is simply a miracle.
Human longevity studies have now been able to reproducibly reverse aging _in primates_ - human trials starting this year!
Feminism has become all but mainstream, unlocking like 50% more of human ingenuity.
Urbanism studies have finally popularized environments where people can live happily and sustainably their entire lives. While there have always been places in the world that are "nice" to live in one way or another, we're kinda getting the science down as to why, and starting to popularize it a little (Strong Towns).
It's just a crazy good time to be alive. There are countless problems all around us, but they are actually getting solved at least somewhere; instead of banging our collective heads against the wall, one can just look at how a problem has been successfully handled somewhere else and try to replicate it!
You are just repeating the same incredibly boring 'Tesla fanboy' things that Joe Rogan likes to say every now and then to create the illusion he's smart, up to date and edgy, which is the exact reason you don't sound smart at all.
I feel we will eventually outpace our own means of destruction. The next big step is getting self-sustaining colonies off Earth, which will do wonders for our ability to survive as a species. The biggest threats are perhaps nuclear weapons and climate change, in my opinion.
I actually feel like the biggest threat we face is a collapse of the world order and another world war, even a conventional one, coupled with a degradation of humanism. These would lead to a global decay of basically everything, including our ability to outpace our own means of destruction and a complete inability to address climate change. The global environment becoming increasingly toxic to the life of our age, humans included, is I think ultimately the end of our line.
>can watch a catalog of almost all the movies ever made on demand tonight.
You must have one hell of a different streaming services experience from that of most people today if you can pull this off without engaging in a fair bit of torrenting and general piracy.
No you just decide what movie you want to watch and pay $4 to rent it on demand. Very common experience available to anyone with a smart TV or connected streaming device.
Yeah, regular consumer streaming might cover "movies still popular enough but not so new that they're only in theaters." However "all the movies ever made" is a significantly higher bar.
For me, from how the saga of the 1st and 2nd papers started, I had a feeling that something really unique had been found, and that the drama and politics came from trying to claim the glory of it, not from it being fake.
You know, most of that stuff you hear about is bad takes on science. Science has moved into the realm of trying to understand complex systems with a lot of noise and unknown variables. There will be a lot of misses, but that is how the sausage gets made.
Exactly, it's like sifting a mountain of ore looking for diamonds. The original Korean team has essentially been doing a slightly guided brute-force search in a domain that they thought was promising, running hundreds or even thousands of experiments. It's a clever approach: you have a real problem because the parameter space is so large, but if you get 'close enough' you might pick up an anomaly that helps you home in on something that is really promising. I'd love to see their notebooks and everything they tried. This is pretty much how we first got electric light: endless failures and then one success.
Well, steady on there. The problem people have with bad science is not that there are misses, it's that there's no way for them to reliably decide when they've had a hit, and so they end up inventing fraudulent pseudo-science. What's happening with LK-99 is science as people tend to imagine it working. Hypothesis -> experimentation -> replication -> theory.
But that sort of science seems to be quite rare. Most of the science people are exposed to in their daily lives is of the Francesca Gino variety, and that's not merely "a lot of noise with unknown variables", that's "nobody will care if we don't do real science, so let's not bother".
For me it's also interesting how we've closed the gap between scientific discoveries and the general public getting to know about them.
With the advent of pre-print servers, you can read this basically "next day".
I see this as a double-edged sword, at best. There is a reason why papers used to be published only after they were peer-reviewed. Now the media are starting hype-cycles on unreviewed research results, just to get the ad impressions before anyone else does. But I'm not sure if they are as diligent about following up on these stories and making later negative result front-page news as well.
So what the heck has happened with LK-99, really? (Disclaimer: I'm no physicist nor chemist, but I have co-written a report on three LK-99 papers [1] and am tracking the Twitter discussion as much as I can. I also got some help from knowledgeable friends---many thanks for proof-reading.)
It turned out that the LK folks were not talking about some stupid shit. Specifically, they were among the last believers in a long-forgotten Russian theory of superconductivity, pioneered by Nikolay Bogolyubov. The accepted theory is entirely based on Cooper pairs, but this theory suggests that a sufficient constraint on electrons may allow superconductivity without actual Cooper pairs. This requires carefully positioned point defects in the crystalline structure, which contemporary scientists consider unlikely, and such a mode of SC was never formally categorized, unlike type-I and type-II SC. Professor Tong-seek Chair (최동식) expressed regret about this status quo (in the 90s, but it still applies today): that this theory was largely forgotten, without proper assessment, after the fall of the USSR. It was also a very interesting twist that Iris Alexandria, "that Russian catgirl chemist", had an advisor, a physicist-cum-biochemist, who studied this theory; as a result they were so familiar with the theory that they were able to tell whether replications followed the theoretical prediction.
Fast forward to today: students of the late Chair continued the research and produced a possible superconducting substance---LK-99---based on the Russian theory. A lot can be said about the papers themselves, but it should first be noted that this substance is not a strict superconductor under the current theory. Prof. Chair once suggested that we need to trade off some (less desirable) properties of superconductors for room-temperature superconductivity, and that property seems to be isotropy. This particularly weakens the Meissner effect criterion due to the much reduced eddy currents, so there is a possibility that LK-99, even if it's real, might not be accepted as a superconductor in the traditional sense. The LK folks, on the other hand, think it should also be considered a superconductor, but they are probably already aware of this possibility.
If we allow anisotropy in this discussion, we do have lots of such things already, most importantly carbon nanotubes. Scientists have even considered the possibility that they may function as typical superconductors [2], without any success though. So it might be appropriate to say that LK-99 is a substance that mimics them in one direction, but is much more malleable. And that is an actually significant result (if true, of course), because for most uses a strict type-I superconductor is far more than is needed, while the practical implications of superconductivity become more achievable. So far we have looked for strict superconductors only because we didn't know an effective way to trigger superconductivity otherwise; LK-99 might change that situation.
This whole discourse should make you more careful about concluding whether LK-99 is a superconductor or not, because we may well end up with a revised definition of SC as a result. If LK-99 makes superconductivity much easier to trigger, it should be considered a superconductor in the macroscopic sense, the authors would argue. Only time will tell whether they indeed made such a substance and whether it will be malleable enough to substitute for other superconductors, but they have a long history and arguably received unfair treatment. And they are about to fight back.
It makes sense not to map all Soviet people to Russians after Russia started the invasion and killed 150,000 civilians in Mariupol alone, with people in Russia supporting that. Many of them have Ukrainian family names and nationalities. Also, many people in Ukraine resisting the invasion have Russian family names and nationalities. If Bogolyubov were alive, he could decide which side to support. So calling him Soviet is the correct way to go.
What practical use cases could there be for that new material? Can we make a battery with it, or something practical?
Indeed, the original draft said USSR, but we are talking about Russian theories, not USSR scientists who are ethnically Ukrainian. We came to realize that we have very little information about the exact theory, and our best knowledge suggests that the theory, while it may or may not have been pioneered by a Ukrainian scientist (see the updated post), is currently rooted in Russia and not in the former USSR.
> Nikolay Bogolyubov was born on 21 August 1909 in Nizhny Novgorod, Russian Empire to Russian Orthodox Church priest and seminary teacher of theology, psychology and philosophy Nikolay Mikhaylovich Bogolyubov, and Olga Nikolayevna Bogolyubova, a teacher of music
It's not a "tendency", it's the truth. All the people you mention are Russians because they lived in the USSR, spoke the Russian language, and worked in Russia, and for them Ukraine was an administrative region.
Not to mention the fact that a country that denies the Soviet legacy and its achievements has no moral right to attribute anything done during that period to its own people - a people and regime that despise their own past, legacy, sacrifices, and relatives, and deny everything that was done.
So ridiculous.
This post has now accumulated some substantial updates, which should not change the conclusion but fix possible misreadings and inaccuracies. Please refer to the HackMD post for the latest version.
"Chair" is a preferred transliteration for this particular person. It is a very uncommon ("Choi" is the single preferred name), but still plausible one.
"LK-99 Updates around the Korean Verification Committee:
- Sample will not be released today/tomorrow
- Group waiting on peer review (implied to be APL materials) and could take 2-4 weeks
- Sample possibly with APL Materials, which is why it cannot be provided to verification committee
- Team is asking the Verification committee for a detailed plan on how the committee intends to perform the verification before proceeding
Now, Hyun-Tak Kim also issued the following quote in regards to the Korean Verification Committee: 'It is not desirable to come to people who are struggling to do business on borrowed money and systematically bully them.'"
On a completely different front, let's say the paper completely checks out and they have discovered this miracle. How strong are the patent protections on materials science?
For example, if someone could replicate the effect with lead + gold, would that be considered a novel material which would not be subject to licensing? Is it the material itself or the method of production?
I'm sure it depends on jurisdiction, but in the US, you can't patent a material, only a method to make it.
If I recall correctly, their patent for method covered a wide range of constituent elements, but left off gold. I would feel pretty bad for them if they genuinely discovered an RTP superconductor but that omission prevents them from becoming billionaires.
But more likely the issue is that their current method has lots of room for improvement and someone else finds one that is substantially better.
ETA: apparently wrong, can patent composition of very novel materials.
You can get a composition of matter patent (https://en.wikipedia.org/wiki/Composition_of_matter) for something shockingly novel, and this might be. It's super hard to find anything that would qualify, but I've been awarded them before for materials science research.
You can write stuff down generally enough that it's hard to make small changes and get around it. I did a lot of "1-10%" stuff in the claims.
Oh, that's weird. They missed all the other noble metals besides silver too. No platinum, rhodium... those have really, really interesting orbital structures. I'm surprised they thought those were worth leaving out of something so tricky, when they might be important. Strange.
The only thing I can think of is that they did it and know that those noble metals don't work very well, and so they're getting everyone else to follow a wild goose chase down a very expensive rabbit hole while they already have a better approach.
... but the tech doesn't look that developed. Very strange.
It could be that they found something incredible by chance and they’re at the limits of their personal capability to further understand and refine. The fact that they’ve known about this for over 20 years suggests maybe. It’s not a bad thing if they are, they’ve already taken one of the biggest leaps. New teams with fresh eyes and varied backgrounds will look at the problem space and undoubtedly see room for refinement.
Agreed, that's what makes the omission odd. Normally you'd expect them to list any candidate that they have any possible reason to think might be relevant, especially if they aren't entirely sure what is going on.
Is it clearer what I mean? I think the only reason to exclude those elements is if they were super confident they weren't a good idea, for some reason we don't know. Or they just... forgot? That would be very surprising.
I'm no patent lawyer, but there is literally a US patent-office category covering "material" for exactly this kind of invention. Section 505 - "Superconductor Technology: Apparatus, Material, Process".
> This is the generic class for subject matter involving (a) superconductor technology above 30 K and (b) Art collections involving superconductor technology. Apparatus, devices, materials, and processes involving such technology are included herein.
It means "art" in the broad sense of "prior art" (e.g. a creative endeavor), not art in the sense of MoMA or the Louvre. Though both kinds of art are beautiful in their way.
I have been a SWE for 30 years. My youngest is off to college and I immediately enrolled in a master's program, apparently because I want to understand QFT, which no amount of reading on my own, without doing the homework, has enabled.
Maybe success in life and science isn't about becoming a billionaire?
Even so - if this works out, their prizes and paid speaking gigs will cover a very comfortable life if that's what they want. I'm not sure why they should be entitled to more than that.
Hey Jacques, it's been fun watching your comments on this, you're very knowledgeable. :) I had a question. I see some "we made a totally perfect/pure LK-99 and it didn't work" reports - is pure/perfect what they should be going for? To me perfect/pure != correct; however, I knew literally nothing about this subject till this week, so I have no idea what I'm talking about. Thank you.
I really couldn't tell you what they should be doing, but I'd love for the original samples to be tested by another lab. That seems to me the easiest way to verify the original claims; it reduces the uncertainty introduced by the lack of good process documentation and the chance that even the original researchers do not quite know how they did what they did, assuming it is all true. The fact that that hasn't happened yet is the biggest source of my continued skepticism; at the same time, my optimism is powered by the partial results of the other labs. It's a very strange combination of data, not unlike other things in the past that did not pan out, but only time will tell which way it will all resolve.
I think a Nobel is pretty much guaranteed at this point.
They'll make a ton of money either way. Maybe not billionaire level, but they'll be venerated wherever they go, will be granted countless prizes, will sit on the boards of important companies, and will have their pick of academic jobs - all of it entirely deserved, of course.
That's kind of my thinking - there is almost no chance they stumbled upon nirvana + did it in the best or most efficient way. There must be many possible optimizations to the chemical structure + fabrication techniques.
Is it possible that the inventors do not receive a dime of royalties?
I don't want to caveat a bunch of stuff with "IANAL" but I am not.
However, they have told me something I interpreted to mean that if someone improves it but still uses it, then they need to license the underlying patent. That just makes sense; it's required in order to implement their concept.
And in reverse, the original company can keep doing whatever they want as long as it isn't covered by the referencing patent. Makes sense to me there too: if they come up with some other clever way to make it good enough, more power to them. There's no reason for them to pay other people who patented something they don't use.
If someone else is wrong and you know more, it would be great to share some of what you know so the rest of us can learn. But please don't post unsubstantive putdowns—they just make everything worse.
I hear you - but there are two problems with that argument.
(1) the internet is, to a first approximation, wrong about everything - so while posting "Wrong" tells us that you disagree with the GP, it doesn't tell us anything about who's actually right;
(2) shallow dismissals like "Wrong" have a degrading influence on the threads - they don't just encourage others to post more of the same, but worse. Basically, either your comments are contributing to improving the discussion culture or they're worsening it - there isn't really any level ground.
You can overcome both (1) and (2) by respectfully and substantively explaining why the other commenter is wrong. Then we all can learn something. Maybe (hopefully) even the other commenter can learn something.
If you don't want to do that or don't have time, it's better to post nothing than to just post "Wrong". That way you don't have a negative impact a la (2), and per (1), the internet is wrong about everything anyway, so you're basically just leaving things the way you found them.
Some indication of where to go for further information is preferable to none.
Consider that HN has ~5 million MAU and that a comment without clarity confounds many people. Writing, and communication generally, is a service to the reader, not the author.
If you don't have time to look up the precise source, a note along the lines of "I don't have time to find the direct link ..." or "I recall but cannot source ..." would help, along with where generally is the best direction to start looking.
Returning to that comment later to clarify/expand is also useful.
> let's say the paper completely checks out and they have discovered this miracle. How strong are the patent protections on materials science?
If it's literally a room-temperature superconductor, the present state of the law is irrelevant. China won't play by the rules. If Korea tries to corner it in the West, the rules will be replaced. (This would have been true had the inventor been French or American, too.)
Completely my train of thought. And it is not only China. Patents are based on laws, and laws serve a society... If that no longer works (obviously after a long and broken process), the law changes.
Even without any change in law, there'll be a seemingly endless swarm of "I can't believe it's not butter" near-identical varieties appearing and having to be knocked down (if possible) for infringement.
As long as there's profit to be made that exceeds the cost of delaying legal action and any eventual fines levied against what may turn out to be near bankrupt shell companies.
I hope they're as weak as possible. The worst possible outcome of a room temp superconductor being discovered is stifling innovation to make money off it
I'm generally anti-patent, but if there were any valid case for patents, this is it. Long, hard labor to discover something entirely new that has massive uses for society.
I'd rest my hopes on them not being dicks about it, not on them not getting benefit from it.
I honestly can't comprehend how anyone would think having a monopoly on a material is valid. They will make money without being the cartel of the superconducting sector.
It wouldn't necessarily have to be a monopoly. There's such a thing as compulsory licensing and statutory licensing. That is, government(s) could decide that the patent is valid, but anybody can use it (without needing the patent holder's permission), as long as they pay the patent holder. And the government could decide to set the price.
Patents have reasonable terms. They will have a 20-year head start and then everyone will be competing with them. It's something that will be free within the researchers' lifetime, most likely. It's not abusive like current copyright, which locks up ideas for two generations.
Just to be clear: you're arguing that it's actually a good thing to deliberately slow down application of a revolutionary technology, so that people can make money?
yes, the time frame is limited enough that people can have an incentive to innovate, without being brutally oppressive. we're not locking civilization into the stone age for multiple generations while corpos become filthy rich, it's just 20 years and then it's fair game for everyone.
now, if it were death + 75 years, or whatever inane number copyright is today, you'd hear a different story from me. but it is not.
Do you think that having a monopoly on some invention will prevent widespread usage of that invention? Like, "yeah, we invented fully clean carbonless way of making energy for 1% of current prices, but no one will have it"? Currently it just means that the creator of widely usable technology will have some percentage of money for others using his invention instead of others making all the money from his invention while he has a pat on the back.
Yes, I do think that, because that is literally what has happened before.
Companies price their offerings so that the global north will buy it, and the global south will have to pay proportionately extortionate amounts, with no relation whatsoever to the cost of production or research.
Yes, because this is partly why everyone keeps working so hard to produce a revolutionary technology. This is also the way it works with drugs, we have just been through a pandemic and a lot of people made a ton of money over their inventions, rightly so.
The pandemic where governments in the global south were fleeced by pharma companies, while the global north enforced patents and let millions of people die? Which, by the way, is still happening in a lot of poor countries that just don't have the money to afford the vaccines.
Not to mention the fact that the vaccines were developed with public money.
Yeah, the pandemic where rich countries had developed economies capable of producing such vaccines, which they then donated massively to poorer countries [1] or sold to other countries at reduced price. If these companies did not exist, or if they had no profit motive, no one would have had vaccines.
Why do people ask for raises, though? Work is somewhat rewarded, but some people don't work long and hard and still have more money. Those working long and hard, with a big usable result, want to be rewarded a little more.
Aren't these results mostly luck driven though? Lee and Kim were lucky to go to that university in Korea, lucky that their professor researched superconductor theory, and lucky that the professor's theory was correct.
At the same time, other researchers around the world weren't so lucky.
But because we don't know what's going to pan out without trying it out, the other researchers are just as integral to the process of discovery.
Is it fair to reward Lee and Kim for their luck, and let everyone else get screwed? Wouldn't it be more fair to make sure everyone is appropriately compensated to begin with?
> Wouldn't it be more fair to make sure everyone is appropriately compensated to begin with?
Yeah, it would be the perfect solution. Problem is how to agree on that, currently we have a market telling everyone what their "appropriate" compensation is.
Why compete if someone will just steal your product?
It would just be a waste of time and money.
Imagine you spend two years and $100,000 to invent a clever handheld MRI using the superconductor, just to have 100 companies steal your design. You would have been better off watching Netflix.
Inventions are still subject to patents. If you make a handheld MRI then others shouldn't be able to steal it from you unless they design their own. The material itself is what shouldn't have a patent on it.
Reading comprehension mate, cmon, don't assume bad intent.
Setting aside the needless dig: if you agree that there is utility in some patents, consider now that LK-99 could be an example of a whole group or class of compounds, with thousands of permutations, that requires significant work to discover, refine, and industrialize. If the compound needs an additive to be manufacturable at scale, should that also be unpatentable?
What about the scientist who spent 23 years to develop lk99? There's a lot of solutions within the existing patent system. If governments wanted, they could simply offer them a stupid amount of money for the patent and open source it.
I see where you're coming from, you want the scientists to get paid rather than just whatever large corp manufactures the compound.
And I don't disagree with you, I think the scientists should get rewarded too. I just don't want access to technology that could help people to get held back by money, as it does with pharmaceuticals.
Transistors, lithium batteries, leds, etc all enjoyed significant patent protection, but still changed the world.
I yearn for a Star Trek utopia as much as the next person, but we do live in a capitalistic world, where I hope that someone can enjoy outsized rewards for upending some previously insurmountable physical barriers.
"All the reactions are carried out under 10^-2 Pa"
OK, I know they mean 10^-2 of vacuum. But why not say that? "10^-2 Pa" isn't enough. Was this a full vacuum oven? Done in sealed quartz vials? Was there a purge, like argon, or just air?
If you look at the oven temperature profiles, you can see the ramp up time (0-2hr, 0-2hr, and 0-4hrs respectively), and the hold time, but the ramp down time isn't specified! There is no cooling rate, it just shows... a line drop off, with no end time. No label. This can be very critical. Were these just pulled straight out and air quenched? And were they kept under vacuum until at room temp or not?
Like, adding extra experimental setup details would take no time whatsoever to include in a paper and yet these researchers just don't do it. It's either pure fucking laziness or some sorta holier-than-thou gatekeeping that comes from theoreticians, or a combination, and it is the reason that replication is so hard in science right now. I would hope that no journal would accept this shit.