Room-Temperature Superconductivity Achieved for the First Time (quantamagazine.org)
1235 points by theafh 8 days ago | 353 comments

This is interesting for a couple of reasons.

1) While I agree with the article's assessment that superconduction along huge distances is a likely no-go given the pressures involved, it's not out of the realm of possibility that we could find a way to apply massive static pressure loads to small high-performance circuits.

2) If the pressures are 75% the Earth's core, that raises interesting geologic questions about what's going on in the Earth's core. Perhaps the model for Earth's magnetic field (or the material that causes it) will need to be adjusted to account for the possibility of naturally-occurring superconductors.

> superconduction along huge distances is a likely no-go given the pressures involved,

Pedantic note: they said only for this material, not a no-go in general. Not sure if you caught that.

"Strained silicon" is a technique for achieving faster frequencies in modern CPUs by applying force to the lattice with another deposition layer. I wonder if the properties of the lattice under pressure can be induced either by a similar process, or even during a phase change when creating the material.


Off the top of my head, that sounds like you're off in the pressure requirement by multiple orders of magnitude. The pressure used for the superconductor is enough to shatter diamond. Creating strain in a lattice like that would just shear the materials apart.

A material's tensile strength can be orders of magnitude higher than its compressive strength. I wouldn't dismiss the possibility of microscopic-level internally stressed materials reaching such pressures.

I also know nothing about material science so it's baseless speculation

I enjoyed this schizophrenic approach to scientific enquiry.

Wow, I literally just had a very similar idea, but what inspired me to think of it was the phenomenon of Prince Rupert's Drops: https://www.youtube.com/watch?v=xe-f4gokRBs

I commented on the quantamagazine article but it's awaiting moderator approval

I wouldn't rule out extremely high internal pressures in static materials as a future research direction. Prince Rupert's drops reach a pressure of 700 megapascals, roughly 2-3 orders of magnitude less than the pressure required for this experiment. I'd imagine that more advanced processes and materials can already manage internal pressures substantially greater than the Prince Rupert's drop.

I can readily imagine some cross-discipline techniques achieving this. A little lithography, some selective dissolution like Gorilla Glass, perhaps some tricks from the optical fiber industry, and bam, you have a fine coaxial filament which creates a region of hundreds-of-MPa stress along the length of the fiber. Maybe even GPa, though that's tensile-strength-of-nanotubes territory.

Certainly it's years away, but I dunno... this feels possible in an engineering sense.

> hundreds-of-MPa stress along the length of the fiber

It's all fine and dandy until it breaks open, along the length, and releases not just the stored mechanical energy, but also all the inductive energy from the flowing current of a few million amperes.

Superconductors aren't magic. You can't just stuff a few million amps into a tiny wire. Just as there is a critical temperature, there's also a critical current density. While those two numbers are related, you'll need quite a bit more than "room temperature" superconductors to carry currents like that. And there's a temperature/current tradeoff: if the ambient temperature is close to the critical temperature, then a wisp of current can make it go normal.


I'd love someone smarter than me to do a back-of-the-napkin job on that. Are we talking a firecracker, a cherry bomb, a few kilos of TNT, or a MOAB?

1 meter of 1mm diameter wire with 1 million amperes amounts to 750kJ, or ~180g of TNT.
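A quick sanity check of that figure, treating the wire as an isolated straight conductor and using the standard low-frequency self-inductance approximation (the real inductance depends on the return path, so this is order-of-magnitude only):

```python
import math

# Self-inductance of a straight round wire (low-frequency approximation):
#   L ≈ (mu0 / 2π) * l * (ln(2l/r) - 0.75)
mu0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m
length = 1.0              # m
radius = 0.5e-3           # m (1 mm diameter)
current = 1e6             # A

L = (mu0 / (2 * math.pi)) * length * (math.log(2 * length / radius) - 0.75)
energy = 0.5 * L * current ** 2   # stored magnetic energy, E = ½ L I²
tnt_kg = energy / 4.184e6         # TNT equivalence: 4.184 MJ/kg

print(f"L ≈ {L * 1e6:.2f} µH, E ≈ {energy / 1e3:.0f} kJ ≈ {tnt_kg * 1e3:.0f} g TNT")
```

This lands at roughly 1.5 µH, ~750 kJ, and ~180 g of TNT, consistent with the figure above.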

A superconductor blowing up across its length and distributing the energy doesn't seem as bad as a whole transmission line's energy released at a point fault.

That also sounds like a pretty acceptable risk for an undersea cable, though the fact that a single break means replacing the entire cable might cancel it out.

You could break it into sections connected by liquid-nitrogen-temperature superconductors, or maybe extremely heavy-gauge copper ones.

Assuming superconductors replace existing wires, it won't be more power than the wires already carry, just in a smaller area.

My instincts tell me it would actually be much safer. Overheating wire would instantly lose superconductivity and vaporize into an open circuit. The same thing a fuse does.

Current wires have enough bulk to heat up and melt slowly. That gives more opportunity to burn things and set stuff on fire. Vaporized plasma is low density, which probably won't concentrate energy enough to light surrounding materials on fire. Plasma also dissociates quickly in air; again, way less time for fires.

For mechanical stress, it may be sufficient to just ensure the stress is evenly distributed in all directions. That way the energy will be driven into a shockwave and heat rather than motion. The wires might make a big bang, but that isn't very harmful underground or high in the air on lines. Explosions are way less dangerous in open air than they are confined.

Put two superconducting fibers next to each other with a thin insulating layer, or many fibers in a bundle.

Have the supercurrent in one direction matched by the same supercurrent in the other direction, alternations as closely spaced as possible.

The magnetic field will be confined, at least.

When the fiber breaks, join the entwined superconductors together (equivalently: have the insulator break down), and the current in the rest of the transmission line can continue flowing.

Coaxial variant also available.

It is hard to imagine a material strong enough to maintain residual stresses this high! Maybe in diamond? And what happens when it fractures and all comes apart? (unless it is contained in a very small volume so has bugger all potential energy)

Looks like some recent synthetic diamonds manage pressure in the TPa regime! 1 order of magnitude higher than that required for this experiment.


Materials do a little better under compression than tension. But yeah, 267 GPa is a lot. The working stress for most high-strength metals is about 1 GPa, though you can exceed that under shock conditions, for instance.

That said, you can locally get huge stresses without breaking the material.

I'm not so sure there's geologic implications. This kind of superconductivity only occurs in very specific compounds at very specific temperatures -- way cooler than what's going on in the earth's core.

We've found a few substances that are superconductive at very low temperatures, but it doesn't have any impact on geology because those conditions simply aren't found on earth.

Not necessarily; the point of the article is that there is a high/room-temperature superconducting material. That doesn't mean it is naturally occurring or that there are any, but the understanding that you can achieve superconductivity at higher temperatures when at (significantly) higher pressures could lead to some new geological hypotheses to test.

The earth's core is north of 4000 degrees Celsius. It's a huge leap to imagine any superconductor at those temperatures, never mind such a precisely engineered mix of elements occurring naturally. We're squarely in the realm of fantasy here, not geology.

"Fantasy" seems a little bit harsh to me, when our deepest borehole reaches only 12.2 km down. Also, our acceptance of continental drift/plate tectonics is rather young.

In other words: we have a few vague assumptions about what happens down there, based on our current understanding of physics, which in turn rests on what we could verify or falsify by experimenting with things that are accessible, or at least visible, to us.

Nobody has been down there; we have no pristine samples, nor the ability to get them (for now).

That we've never made a hole that deep does not mean we know nothing. We know a great deal by measuring seismic waves as they pass through the earth. That's how we know the earth has a solid inner core, despite the temperatures.

It's fantasy given what we understand about the earth and about superconductors. That's not impossible, but like any highly improbable claim it requires extraordinary evidence to take seriously. Otherwise it's like claiming there's a teapot between the earth and the sun, or some kind of invisible deity. We've left the realm of science and the natural world and entered the world of fantasy.

I didn't claim we know nothing, just that it's vague. I'm aware of seismic surveys, even attempts to use neutrino observatories to look through.

We can't really predict when a volcano will erupt, or with what force. Same for the when, where, and why of earthquakes. Those are much more accessible and of more concern to us, yet we can't, because our understanding is vague! Got it?

I think the only way to answer that question is to do more research, as opposed to squarely assuming that something is impossible simply because you didn't think of it.

Extraordinary claims require extraordinary evidence.

Earth's core is pretty hot IIRC.

And probably unlikely for the same material to occur naturally.

I don't know; this isn't some esoteric ceramic, it's hydrogen sulfide and methane, both of which occur in nature.

You may be interested to know that the same logic applied to concentrations of U-235 needed to run a fission reactor turns out to be false:


In modern industry, getting enough U-235 to run a reactor requires separating and enriching the ore, but this happened by chance in the distant past.

The key to why the Oklo reactor was possible was not that a natural process separated U-235 from U-238 in a Uranium ore, but that the reactor existed 1.7 billion years ago. U-235 has a half-life of about 700 million years, so the ratio of U-235 to U-238 in naturally occurring Uranium decreases over time.

U-235 was about 3.1% of the Uranium in ores 1.7 billion years ago, a level that can fuel some reactors today. Uranium ore today has only about 0.72%, so an Oklo-type natural reactor could not form on today's Earth. See [1] for a good description; the atlasobscura article is poor and factually incorrect in places.
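The back-extrapolation is straightforward to reproduce: scale both isotopes back up by their respective half-lives and retake the ratio. Using commonly cited half-life values this lands near 3% (the exact figure depends on the age assumed):

```python
# Back-extrapolate the natural U-235 atom fraction 1.7 billion years ago.
T_U235 = 7.04e8   # U-235 half-life, years
T_U238 = 4.468e9  # U-238 half-life, years
t = 1.7e9         # time before present, years

f235_now = 0.0072  # natural U-235 atom fraction today (~0.72%)

# N(past) = N(now) * 2**(t / half_life), applied to each isotope.
n235 = f235_now * 2 ** (t / T_U235)
n238 = (1 - f235_now) * 2 ** (t / T_U238)
f235_then = n235 / (n235 + n238)

print(f"U-235 fraction 1.7 Gyr ago ≈ {f235_then:.1%}")
```

Because U-235 decays much faster than U-238, the fraction was several times higher back then, which is what made a natural water-moderated reactor possible.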


But in a molten core, won't hydrogen sulfide and methane bubble out from the center, leaving only iron and heavier elements and compounds in the super high pressure depths?

Very likely. But more studies would be needed. The best phase diagram for H2S and methane that I could find only went up to ~140 bar. There were some interesting papers on clathrates and methane in the kilobar range, but I don't think those are relevant for deep-earth experiments. The deep earth is ~3.3 megabar, for comparison.



Well, how likely is it that we even exist to begin with? And how likely is it that we are alive because of a ball of hydrogen and helium really far away?

One very unlikely thing happening doesn't make another unlikely thing any more likely.

It doesn’t seem to just be one very unlikely thing, but a huge series of unlikely things...

The universe is pretty big, and there are lots of places where unlikely things didn't happen. When the next unlikely thing happens, it most likely won't happen here.

Carbon based life is unlikely in the universe and yet here we are. Watching tiktok. Just can't rule anything out.

There's an obvious observation bias here though.

Yes. Yes there is.

No, but it does show how significantly unlikely things can happen given enough time and billions of moving variables.

If you ask yourself this question, then 100%. You need to exist to even consider the unlikelihood of your existence.

But as far as I know the existence of naturally occurring superconductors is a completely independent probability, so it doesn't really make sense to use one to justify the other. Are there naturally occurring superconductors somewhere in the universe? I mean, without doing any calculations I'm tempted to say almost certainly yes. Do they exist somewhere in the Earth? As far as we know, extremely unlikely.

How likely is it that an observer can observe their own unlikely existence, given that the observer does exist against all the odds?

It's pretty likely. There are at least 1,000,000,000,000,000,000,000,000 stars in the universe, and most of them have planets. Billions of years have passed since the Big Bang. If some chemical process can create life, it's very likely that somewhere it did.

Scientists who have done the math disagree. It's a major current issue.

It's extremely difficult to produce useful proteins from random DNA chains. As in, if I took all of the atoms in our galaxy, paired each with random DNA, and allowed you to pick just one (blindfolded), only one of those DNA strands would contain the DNA necessary to produce a valid/useful protein. Literally every other atom has garbage DNA/proteins. The human body contains between 80,000 and 400,000 proteins.

DNA looks far more like "information" than it does like random bits written to disk. It's analogous to trying to find an x86 program of at least 160 instructions that computes a valid mathematical function by randomly splatting 1s and 0s to disk and then "running" the "code". Eh, maybe it'll eventually happen, but you can see how hard it actually is in practice. Heat-death-of-the-Universe hard.

Given a long enough timeframe, even unlikely things will happen. However, the Universe isn't very old—which is why it's a major current issue.
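For scale, the numbers behind this kind of claim are easy to tabulate. (The 160-residue length and the ~10^69 atoms-in-the-galaxy figure are the rough order-of-magnitude assumptions from the comment above, not measured values.)

```python
import math

residues = 160          # length of the hypothetical protein-coding chain
alphabet = 20           # number of standard amino acids
atoms_in_galaxy = 1e69  # very rough order-of-magnitude estimate

# log10 of the number of distinct 160-residue sequences: 160 * log10(20)
log_sequences = residues * math.log10(alphabet)

print(f"~10^{log_sequences:.0f} possible sequences "
      f"vs ~10^{math.log10(atoms_in_galaxy):.0f} atoms in the galaxy")
```

The sequence space (~10^208) dwarfs any plausible count of physical trials, which is why the debate below centers on whether the search was ever random to begin with.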

You do realize that natural selection iterates through these same sorts of garbage proteins at a rate of trillions and trillions of bacteria per year, for millions of years, and with a bias toward previously existing functional structures? It's a pretty potent optimization process, given the timeframes and scale. It isn't equivalent to randomly splatting bits to disk, because there can also be incremental progress made toward a functional protein, unlike code.

Doesn't that presuppose we already _have_ life, and thus is irrelevant to the question, or am I misunderstanding?

There is a question of where does the first self-replicating molecule come from, and how do we get to DNA from that, and how do we get the diversity of proteins that we see today.

Creating random DNA sequences and showing they don't produce 'useful' proteins has nothing to do with any of those questions (and how do we even know they are not useful?)

I think the main stumbling block for me is the perception of time scales. It is impossible for me to say, with certainty, any information regarding time scales of a billion years and the randomness which permeates the evolutionary process. What we see and know are the winners of the race, not the mountains of failures.

> It's extremely difficult to produce useful proteins from random DNA chains.

I think we all agree about that.

> Scientists who have done the math disagree. It's a major current issue.

The problem is that you can do the correct calculation or a wrong calculation. For example, the number 160 is probably too high. Some current proteins have 160 or more amino acids, but there are shorter proteins, and there are some useful amino acid chains with only 20 amino acids.

> if I took all of the atoms IN OUR GALAXY

There are between 100 and 200 billion galaxies in the observable universe. There were billions of years to do the choosing - how many times per minute am I allowed to do it?

> The human body contains between 80,000 and 400,000 proteins.

Good thing first life wasn't homo sapiens then (and probably wasn't using DNA).

The framing you have here is an attractive one, but I don't think it makes much sense in the context of reproducing molecules.

There is no reason to posit random DNA chains.

The statement that "the number of DNA chains that produce valid/useful protein in the space of all possible DNA chains is vanishingly small" seems reasonable (however I'm not sure how we would know these chains are the only ones that produce valid/useful proteins).

The idea that we need to choose randomly from the space of all possible DNA chains is not reasonable.


Once we have a reproducing molecule, we expect to see a multitude of valid reproducing molecules as descendants of that first molecule. We expect (at least some of) these descendants to eventually be extremely different from the original molecule, and by their nature valid reproducing molecules.

Once we have a reproducing molecule (like DNA) that creates other molecules (like RNA and proteins) we can expect the same of its descendants, and the descendants' by-products.

If these molecules form an ecosystem, where the reproduction of one relies on the validity of the other, the only successful variations within the ecosystem will be valid variations of the ecosystem.


The space that we are choosing from is not the space of all possible DNA chains, it is the space of all DNA chains adjacent to existing valid chains (or chains in a valid ecosystem).

It's analogous to taking a valid x86 program that can reproduce, randomly adding/removing/mutating some bits on reproduction (with low frequency, very quickly, and in a ginormous space - think on the scale of molecules in the Earth's oceans), and asking if that new program is also valid. And then, after millions of years of this, asking if one of the programs is a valid mathematical function.


There are still big questions here. Questions like "how do we get the first reproducing molecule?" and "is DNA likely to arise once you have reproducing molecules, or just one out of many options?"

None of those questions gives reason to invoke the number of all possible variations of DNA as evidence that the variation we see in proteins is somehow unlikely.

Once we know that there exists one valid DNA/protein system (which we do, as it exists), and we know that variations of DNA/protein ecosystems can be functional (which we do, as we've observed it), it is reasonable to expect a multitude of valid, functional DNA chains, and the proteins produced by them.

Like you, I can imagine hundreds—maybe thousands—of ways to resolve these issues.

That's hardly relevant though, what matters are resolutions that actually work.

I agree that we are (very) likely to find the mechanisms involved, but so far, we haven't. In fact, we don't even have a theory on how DNA was originally developed, or how non-functional DNA/proteins self-replicate, or really anything at all. We only have the end product (which does—as you point out—work). The question is how did it get there, and previous hand waving about a huge, old Universe and random chance isn't sufficient.

It's going to have to be something similar to what you (and other commenters) describe: mechanisms that preferentially and relatively quickly produce valid, self-replicating DNA/protein chains. To date, no one has found anything even close to that.

You see the difference between this argument and what you wrote above though?

Perhaps I'm reading your original post too strongly, so please correct me if so.

In the first post you compare the number of valid DNA chains to the space of all possible chains, you mention the number of different proteins in the human body, and you draw an analogy to a random sequence of bits forming a valid program.

None of these talk to the probability of a reproducing molecule arising through physical processes, nor do they talk to the probability of DNA as a descendant of that original reproducing molecule (or potentially multiple original molecules).

I get that you understand the gaps in our knowledge of how these systems came to be; my point is that your original argument is misleading in the exact same way you claim the argument

"Billions of years have passed since the Big Bang. If some chemical process can create life, it's very likely that somewhere it did."

is a

"kind of hand-wavey statement [that] seems to convince most people. Universe is hella-old, and really big. Ergo, incredibly rare stuff has happened basically infinitely many times. Life everywhere, etc."

(this was a reply to a different post, but I think it holds to the comment you originally replied to).

In fact, I find the argument that "things reproduce, and have been reproducing for a long time in a large environment, so we expect to see complexity in those things" much more reasonable than "most random arrangements of this molecule are useless, and we can see lots of useful arrangements, therefore time and randomness can't explain them".

We're discussing how to get those "things that reproduce" in the first place. I agree that once you have useful things that reproduce, it's easy to keep it going. Similarly, if I have a running copy of Linux, I can use the tools (and source code) to produce another copy of Linux.

But how do we get the first copy, the "original reproducing molecule" as you put it?

The usual explanation is that the "first copy" arose randomly, and then kept going. Do you believe that? I suspect not—but most people do.

We know that it can't have been random (which is the argument I gave, and I suspect you agree with). We should tell people "it wasn't random, something about the fundamental nature of these molecules caused better and more complex molecules to emerge." But we have no mechanism for that, just a (valid) belief that it has to be true.

I think we should find those mechanisms, and simultaneously, stop telling people that random chance + vast universe + long timespan is sufficient.

> The usual explanation is that the "first copy" arose randomly, and then kept going. Do you believe that?

I believe a variation of that.

I believe that the first copy arose through physical processes.

Invoking 'randomness' is unnecessary and misleading.

Do you not believe this?

To my knowledge, we don't yet have a mechanism for how such a molecule came into being (though there are ideas).

We also don't have any reason to think that it must be some random single choice from a large possibility space, and we don't have any evidence at all that it could have arisen from non-physical processes (what would that even look like?).

This is what I mean by random: no DNA sequence is privileged over any other, and no (known) physical process produces anything but random DNA sequences (excluding, of course, copying already useful DNA sequences).

DNA has about as much structure as bits on a disk (with coding for one of 20 amino acids as the "bits"). No DNA sequence is more likely than any other to exist.

I think that means we need to identify strong physical processes that produce useful DNA strands; you, apparently, aren't as concerned about it. Maybe you're right, but from where I'm sitting, it's hard to imagine what those physical processes might be since the strands they must produce are extremely, unimaginably rare in practice.

DNA is basically information[0], and we literally have no example of a chemical process producing valid DNA information, nor is it at all obvious how such a process might work in practice. In the past, large amounts of time plus an equal likelihood of producing any random DNA was considered sufficient to think "well, useful DNA strands could appear randomly." We now know that's extremely unlikely, to the point of being effectively impossible, statistically speaking.

[0] https://www.nature.com/scitable/topicpage/dna-is-a-structure...

But some DNA sequences are privileged over others!

The mechanisms for producing new DNA sequences involve copying existing DNA sequences. Thus, the ones that exist are privileged over the ones that do not exist (yet), and adjacent sequences are privileged over a random sequence.

> No DNA sequence is more likely than any other to exist.

It is far more likely for a DNA sequence very similar to my own to exist than a random sequence.

> we need to identify strong physical processes that produce useful DNA strands

We have already identified those processes! We know quite well how the machinery of DNA replication works.

If we care about the first DNA molecule to ever exist it's a very different question. We don't need to find a physical process that produces a modern DNA molecule from 'raw parts', rather one that takes not-quite-DNA and converts it into DNA.

> it's hard to imagine what those physical processes might be since the strands they must produce are extremely, unimaginably rare in practice.

Can you imagine slightly simpler DNA? Say just a bit shorter? What's the simplest molecule we might still call DNA, that is reproducing? Can we imagine machinery that would produce that?

I think it's very reasonable to think such machinery could exist, even if we don't know the exact mechanisms involved. We know that RNA can self-reproduce, and also produce proteins, so it's reasonable to think that machinery to produce RNA strands could evolve to produce DNA strands (for example).

The only involvement randomness has in this whole process are (relatively) rare and infrequent changes to self-replicating molecules, and (potentially) the initial formation of a self-replicating molecule.

It is irrelevant how many possible DNA sequences there are, or how much information is stored within them, as we know new sequences are derived from previous ones.

> If we care about the first DNA molecule to ever exist it's a very different question. We don't need to find a physical process that produces a modern DNA molecule from 'raw parts', rather one that takes not-quite-DNA and converts it into DNA.

We haven't found that, and apparently aren't even close. We don't even have any idea what something like that might look like, or even more critically: given all the incredibly, insanely, unbelievably rare DNA sequences that exist in the world today, why is such a fundamental process capable of producing them not abundant as fuck already? Where'd it go? Why is this process even a mystery in 2020? It should be ubiquitous; in fact, all of the primordial soup mechanisms should be. Certainly that's what we expected when the theory was developed, and it hasn't panned out.

Anyway, I think we've exhausted this topic. Thanks for commenting.

> We haven't found that, and apparently aren't even close.

We do have ideas! Specifically, within the RNA world hypothesis, the transition period is called the virus world [0]

> given all the incredibly, insanely, unbelievably rare DNA sequences that exist in the world today,

We have a good understanding of where diversity comes from, I'm not sure what point you're making here.

> why is such a fundamental process not abundant as fuck already. Where'd it go? Why is this even a mystery?

I don't think anyone thinks this process need be 'fundamental', though it definitely is pivotal. It only really needed to happen once, and then DNA was off reproducing and spreading by itself. That said, it looks like viruses converting RNA to DNA could still be happening today.

In general, we don't expect novel self-reproducing molecules to arise today, because they are out-competed by existing self-replicating molecules. In a world where nothing is replicating the first replicator is king. In today's world a brand new replicator is food for something else.

> Maybe it's possible that your romantic view of how this all happens (pseudo-Darwinian circa 2020) isn't telling the whole story?

I don't think I, or anyone else really, is claiming to tell the whole story - just that we have good reason to believe this came about through physical processes, and no evidence to believe... well I'm not sure what else there could be.

What are you proposing?

[0] https://en.wikipedia.org/wiki/RNA_world#Evolution_of_DNA

Interesting discussion. I’d like to ask a sincere question:

Wouldn’t a system A that is capable of encoding another complex system B, need to be as complex in order to encode all the information in the result?

It’s like a compression algorithm, you can encode the information, but the complexity level of that information is still there (also the difficulty in compressing the information increases very fast - exponentially or maybe even factorially).

So if the most basic protein sequence requires so many bits of information, wouldn't anything capable of producing that (in a non-random manner) also require at least that level of information (if not more)?

It doesn’t matter what process we call systems A and B.

So it seems if randomness doesn’t solve the problem (because math), then the only conclusion is that there is a fundamental requirement for intentionality.

It's possible for a simple thing to encode something more complex, deterministically.

The prime example is The Game of Life - simple rules from which complex behaviour emerges.
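For reference, those "simple rules" are just a neighbour count. A minimal sketch, using a sparse set-of-live-cells representation (one of several common encodings):

```python
from collections import Counter

def life_step(live):
    """Advance Conway's Game of Life one step; `live` is a set of (x, y) cells."""
    # Count how many live neighbours each cell (live or dead) has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has exactly 3 neighbours (birth),
    # or 2 neighbours and is already alive (survival).
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
after_one = life_step(blinker)    # becomes a vertical bar
after_two = life_step(after_one)  # back to the horizontal bar
```

Two rules, a few lines of code, and yet arbitrarily complex structures (gliders, guns, even universal computers) can emerge from them.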

This idea of information is one we're putting onto the system, not some inherent attribute. Yes, the encoding of a protein needs to have enough information to produce that protein (or a family of proteins), but that says nothing about the process that created the encoding.

For example, a strand of RNA can be spliced in many different ways to create many different proteins [0] and this process can go weird in many ways. New sequences will arise from this process, even though they weren't 'intended' to.

[0] https://en.wikipedia.org/wiki/RNA_splicing

The Game of Life doesn't produce complex behavior from simple rules.

The complex behavior comes from a large enough random starting state combined with a very low minimal required complexity to see something interesting. Also, even for a short interesting run of local behavior, the game never produces a stable behavior that grows in complexity beyond the initial information encoded in the random state. (i.e. if there is a bubble of cool stuff happening somewhere on the 2d plane, something usually interferes with it and destroys that pattern - like waves in the ocean, even when the energy curves combine to form a wave once in a while, they are limited and temporary).

So the Game of Life is actually an example that the system is limited to the information encoded in the initial starting state.

In the starting state there is either:

- a large enough random search space (i.e. a million random attempts with a 100x100 board might get something cool looking)

- intentionality (a person can design a starting state that can produce any possible stable system)

Yes, and useful proteins are basically the equivalent of "oscillators" or "spaceships" in the Game of Life. But most runs of the Game of Life don't produce oscillators or spaceships, just like most proteins are useless.

That's why the "initial condition" is so important, and why DNA is so important: without a good "start state", you get useless results—just like in the game of life.

What we are trying to find is not Conway's rules for the game of life, but this: how do we produce useful starting states (DNA) with a physical system? And more importantly, how do we create those starting states preferentially (i.e. non-randomly)?

We still need a model for how useful DNA (which corresponds to the "initial state" in the game of life) gets created. And we have no model for that right now, other than assuming unique random initial states are continually occurring and letting the law of large numbers eventually "find" winners.

For DNA, at least, it could have come from RNA (as per the link in my last post).

While I don't think the pre-biotic problem is solved at all, we have a lot more models of how it could have happened than you seem to credit - this is after all a huge research area.

For example, here is one [0], and here is a whole journal issue on the subject [1].

I found these by searching for 'evolution of DNA' and 'evolution of RNA'.

Now, these models all include some randomness, but in no way does anyone assume "unique random initial states are continually occurring... letting the law of large numbers eventually "find" winners"

The models show plausible environments where pre-biotic synthesis of RNA (or RNA pre-cursors) can occur, and stabilise.

This model you keep bringing up - randomly selecting a molecule from all possible combinations of atoms and saying 'enough time will get you one that works' - is not mentioned anywhere that I have seen. Perhaps some lay-people (of which I am definitely one!) believe it, but as you point out it is so obviously implausible it falls down on first inspection.

There are other models (lots of them!) and they don't rely on this pure randomness.

[0] https://phys.org/news/2018-01-chemical-evolution-dna-rna-ear...

[1] https://www.mdpi.com/journal/life/special_issues/evolution-R...

Minor side note, but most runs of the game of life actually will produce spaceships and/or oscillators, even starting from a random configuration. (Initialize a 100 x 100 box of cells randomly, and you're virtually guaranteed to get several gliders flying off of the resulting mess.)
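For anyone who wants to check the glider claim themselves, here is a minimal sketch of Conway's B3/S23 rules in Python (my own toy implementation, not from any particular library), verified against the well-known fact that the standard glider translates one cell diagonally every four generations:

```python
from itertools import product

def step(cells):
    """One Game of Life generation on an unbounded grid; cells is a set of (x, y)."""
    counts = {}
    for (x, y) in cells:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    # B3/S23: a dead cell with exactly 3 neighbors is born; a live cell with 2-3 survives.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# The classic glider pattern (.O. / ..O / OOO), as (x, y) coordinates.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After 4 generations the glider has moved one cell down-right.
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

Seed the same `step` function with a random 100x100 set of cells and gliders will usually peel off the edges of the resulting ash, as the comment above says.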

This assumes a perfect random distribution though.

What if amino acids and proteins are in fact likely to arise naturally and in favorable circumstances?

There was an article recently (~1-2 months) on HN about a supercomputer/AI discovering new chemical pathways for part of this process, but I can't seem to find it anymore. I think it was about forming amino acids.

I'm no expert on this subject (the opposite really, I've slept through chemistry), but my experience with large-scale simulations has been that a surprising number of them converge to the same final result given the same starting parameters even if most processes within them are perfectly random. The bigger the simulation, the more likely they are to give you stable results. And the universe is pretty damn huge.

So I like to believe the creation of the foundations of life is in fact more-or-less inevitable in our universe, in turn increasing the chance of useful proteins etc. forming.

> And the universe is pretty damn huge.

And that kind of hand-wavey statement seems to convince most people. Universe is hella-old, and really big. Ergo, incredibly rare stuff has happened basically infinitely many times. Life everywhere, etc.

Only…it's actually not that old, we have some idea how big it is (not that big, just lots of space between atoms), and thanks to computer science, we're pretty good at analyzing issues surrounding computation complexity.

And as it turns out, the DNA-to-protein pathway is much much much less likely than our initial hand waving made it seem.

I'm not saying it didn't happen, I'm saying with our current level of knowledge we have no idea how. The math based around being old and big doesn't work. So we need better math, more studies, etc. and less hand waving.

>Ergo, incredibly rare stuff has happened basically infinitely many times.

This wasn't my argument though. In fact it was the complete opposite.

I was proposing that it is in fact likely, and thus pretty much guaranteed to happen in a large universe, as opposed to being individually unlikely but still inevitable given a large enough universe.

So we're working with different assumptions here.

In fairness I put my assumption way at the beginning of my post, so it probably got forgotten about by the end of it. Quoting myself:

> What if amino acids and proteins are in fact likely to arise naturally and in favorable circumstances?

We haven't yet conclusively found all of the pathways these can arise, and we continue to discover more. People just tend to assume it's pretty unlikely. I'm not so sure.

Comparing amino acids to proteins is a category error, almost akin to comparing individual x86 instructions to a full x86 Linux kernel binary. The increase in complexity is not just one of size; it is a fundamentally different thing altogether.

The amount of information (via DNA) needed to create a useful protein from the 20 amino acids is absolutely incredible.

So…finding more potential (note: not demonstrated) pathways to create amino acids ex nihilo does literally nothing for producing viable DNA strands and proteins. DNA and proteins are a totally different problem, and we've made basically no progress at all, and the more we look at it, the less likely it seems.

And then people (not you per se) hand wave about the size of the Universe to explain the problem away. I think we should instead accept the problem exists and work to solve it.


Separately, we have no known examples of any natural process producing what we, as humans, would call "information." DNA is much closer to information than any other concept, to the point where if we were sent something similar to DNA from space in, say, a radio transmission, we would absolutely assume intelligent life had made that transmission.

That is, with our current knowledge, it takes something vaguely "intelligent" to produce the kind of information we have in DNA. Maybe such processes exist, but this is an absolute far cry from producing amino acids from chemical precursors, which are not information-like at all (and thus, it is unsurprising that we can do it).

I have found your comments on this thread very intriguing. The computational analogy applied to DNA and proteins is apt for me. Also, this strikes me as a potential resolution to the Fermi Paradox. What do you think?

Well, they're obviously related in that we really need to discover/determine how useful DNA came to be, starting with just the primordial soup. If we can get more accurate numbers for the Drake equation, that would certainly go a long way towards explaining the paradox.

I brought it up on HN because relatively few people seem to know this is still a problem, and progress on resolving it has been slow.

The many worlds interpretation of quantum mechanics increases the combinatorial space to play in for some otherwise unlikely seed event by tremendous amounts.

Also, we know stuff like the smallest observed polymerase, but we don't know the smallest functional one that could have evolved into it.

We also have self-replicating pure RNA systems, though the components aren't abundant. But this is just what scientists came up with in one effort trying to make one to prove it is feasible:


But why assume a leap directly to proteins, by definition a long chain of amino acids? Couldn't we have started with self-replicating peptides and incremental improvements?

Peptides are just short proteins, and no, we have no idea how to get them either (though it's obviously easier).

Also, it's not that what I've called bad/garbage DNA doesn't produce proteins, it's that the proteins produced are useless: they don't "do" anything. There's no obvious reason why DNA "extension" should produce useful proteins over un-useful ones, at least, no mechanism that we have discovered so far.

Instead of accepting a theory of incremental improvement that "sounds nice", waving our arms about random chance and an old, vast Universe and going "yup, that's how it happened!", let's try to develop testable mechanisms and validate them.

I'm asking for more rigor while simultaneously shooting down "random chance", "plenty of time", and hand waving about the Law of Large Numbers. We've done the math and we need far more effective, directed mechanisms than random chance to produce useful DNA sequences.

> Peptides are just short proteins

This is very misleading. Proteins are defined by a minimum level of complexity, being strictly higher than peptides.

Fascinating. And of all life on Earth, how did human consciousness arise?

We're basically virtual machines/entities/minds stuck inside biological bodies, and the majority of us are at odds with nature and every other living organism on Earth.

I guess I'm more interested in how that happened than how life started, both seem equally incomprehensible to me, though.

I've seen postulated mechanisms by which increasing levels of self modeling get selected for, demystifying paradoxes has some examples.

Well, that's why an RNA first world is most likely. RNA can act catalytically as RNAzymes.

Large number comparisons are difficult for humans to comprehend. If you simplify life to a DNA strand 256 nucleotide long (for the sake of math comparison) - then the search space is 4^256. To comprehend how large a search space this is watch 3Blue1Brown's explanation https://youtu.be/S9JGmA5_unY?t=38
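To put a number on that simplification (the 256-nucleotide strand is the parent comment's toy model, not a biological claim), a quick back-of-the-envelope in Python:

```python
import math

# Search space for a 256-nucleotide strand: 4 possible bases per position.
space = 4 ** 256
digits = len(str(space))             # number of decimal digits
atoms_in_universe = 10 ** 80         # commonly cited rough estimate

print(digits)                                  # 155 digits, i.e. ~1.3e154 states
print(math.log10(space / atoms_in_universe))   # ~74 orders of magnitude larger
```

Even granting every atom in the observable universe one attempt per second for the age of the universe (~4e17 s), you cover a vanishing sliver of that space by blind sampling, which is the commenter's point.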

A re-paraphrasing of the Fermi paradox.

Maybe humanity will be able to check the entire universe for life. I love imagining that scenario and wondering what the reaction would be when we find none.

The Fermi paradox isn't even a real paradox.

We don't have enough information on how likely or unlikely life is to occur because we haven't even been able to replicate it yet.

It could be that life is so unlikely that we're lucky to be here.

This comical notion has always struck me as scientism at its finest. There is nothing "pretty likely" about it considering a) we have no idea how abiogenesis occurs, and b) we have literally zero evidence of any form of life outside of our rock.

You cannot conclude it is very likely just because there are lots of stars. That's not how math works.

The second sentence does not support the last one. The last one is independent and known as the anthropic principle, i.e. even if it was extremely unlikely by some measure, it still happened. Whereas there's no indication whether 10e24 stars were a number far past the goal post, or relevant at all on your back of the envelope.

It rather seems that you (and the 4500-6000°C comment) were committing a fallacy of large numbers. You might as well write a friggin' fantastillion, unimaginable, zomg!, and you would still convince roughly the same gazillion number of people. But it's good to hear the details.

4000-6000 doesn't sound much at all in years for me for example, but it used to.

For what it's worth, the classic Drake Equation is that old "back of the envelope" calculation they put together to try to ask this very question - what's the likelihood life evolved?

The problem with the Drake equation is that it has several variables for which we had no flippin' idea what the values were. For example, supposing there are a bajillion stars (we know that much), then we have to multiply by how likely it is for those stars to have planets - and at the time, the likelihood of planets was completely unknown.

That at least, is something that's changed in the last decade, thanks to new telescopes. We've addressed one of the Drake Equation's big unknowns: we now can hazard a guess that planets are extremely likely.

Sadly there are enough other unknowns that we still can't make any sort of conclusions, but at least the betting odds are going up.

bruv this some gamblers fallacy shyt

well it is, but it's an accurate response to somebody already engaged in it. It is the GP who finds it "probably unlikely" without any indication of actual probabilities, chiefly rounding down from a haphazard guess, after all.

4500-6000 °C

One of the interesting conversations about a new technology is figuring out the 'ladder' of applications for the tech as more or larger versions become available in a particular price class.

Portable MRIs are one application, once you can make a big enough chunk of the stuff, but earlier than that, couldn't you use small pieces of superconductor in communications equipment? Power supplies, ranging from IC power regulators up to mains power transformers?

While I agree that it is an interesting finding, the "at room temperatures" lede buries the fact that using a median of 330-360 gigapascals as a measure of the Earth's internal pressure equates to a pressure of 8223639745.0093 pounds per square foot. (Rough calculations based on 3,300,000 to 3,600,000 atm for the Earth's inner core)

[Edit to correct the maths] = 5325785739.6251 pounds per square foot.
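For comparison, a quick conversion check in Python, using the 267 GPa figure from the paper itself rather than the inner-core estimate above:

```python
# 1 pound-force per square foot = 4.44822 N / 0.09290304 m^2 ≈ 47.880 Pa
PA_PER_PSF = 47.880259

pressure_pa = 267e9                  # the paper's reported 267 GPa
psf = pressure_pa / PA_PER_PSF
print(f"{psf:.3e} lbf/ft^2")         # ~5.58e9 pounds per square foot
```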

Couldn't the pinch effect be used to crush two superconducting filaments into each other? My rough calculation is something like 5 MA (in each conductor, spaced by 1 mm) would do the trick, assuming that much current and B don't interfere with superconductivity (I have no idea).

Looks like the highest current ever achieved is 100 kA. A mere 100 fold increase and we should be good to go.
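A hedged sanity check of that estimate: treating a single filament as a z-pinch, the magnetic pressure at the surface of a wire is B²/2μ₀ with B = μ₀I/2πr. Solving for the current that produces the paper's 267 GPa on an assumed 1 mm radius filament (pure back-of-the-envelope; it says nothing about whether any superconductor survives that field):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def pinch_pressure(current_a, radius_m):
    """Magnetic pressure (Pa) at the surface of a straight wire carrying current_a."""
    b_surface = MU0 * current_a / (2 * math.pi * radius_m)
    return b_surface ** 2 / (2 * MU0)

target_pa = 267e9    # pressure reported in the paper
radius = 1e-3        # assumed 1 mm filament radius
b_needed = math.sqrt(2 * MU0 * target_pa)          # ~820 T at the surface
i_needed = 2 * math.pi * radius * b_needed / MU0   # invert B = mu0*I/(2*pi*r)
print(f"{i_needed / 1e6:.1f} MA")  # ~4.1 MA, in line with the 5 MA guess above
```

So the order of magnitude checks out, though an 800-tesla surface field is far beyond the critical field of any known superconductor, which is the objection raised in the reply below about current and field destroying superconductivity.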

Current and magnetic field both damp a material's superconductivity. Exceed the critical current in particular and you get catastrophic material failure.

The core is computing the ultimate question.

This wouldn't exist on our earth, but there are superconducting neutron stars out there (at pressures far greater than we can produce on earth). I wonder if this has any implications for those.

For anyone interested, I recommend a BBC Horizon documentary about the earth's core called "The Core". I found about it on HN and quite enjoyed it.

https://news.ycombinator.com/item?id=4035519 https://news.ycombinator.com/item?id=13486960

Sounds interesting, but the video doesn't seem to be available anymore. I'd be interested if anyone has a working link!

You can download it from this archive.org collection (s2011e12): https://archive.org/download/BBCHorizonCollection512Episodes

It's also super interesting for engineering reasons:

- Maglev trains can be built at much lower cost

- Other levitating transportation via the Meissner effect

- Quantum computing

- Entertainment, theme parks

Lots of things I can think of.

GP was talking specifically about the approach in the article, which requires massively high pressures to sustain, not superconduction in general. I don't think the required pressures are really practical for any of the things you listed.

Sure, it's not ready for those use cases yet, but the fact that it has been achieved at room temperature is a milestone, and those use cases are things that would benefit from room temperature superconductivity.

No, it is not. Even though "room temperature" is technically met, the claim is misleading, because the goal of room-temperature superconductivity meant superconducting under quite normal conditions, where it would be of practical use. When you have to apply that immense pressure, we did not really come closer to superconducting in the normal world.

Still a success, yes, but probably not a milestone, unless that discovery leads to other findings, but I do not see anything indicating such.

> they stress that the newfound compound will never find its way into lossless power lines or frictionless high-speed trains

You can transport electricity from wind turbines far, far away. The wind will always blow somewhere, so the storage problem goes away.

Or solar power transported from the south to the north of US or EU.

Superconductors are the ultimate joker in green energy.

You can do this today and if you get energy for free, then the transportation-losses are of little significance (they are just a few percent to begin with).

Superconductors won't play a role in energy transport until their installation and operational costs over their lifetime are less than the installation and operational costs, plus the losses, of legacy conductors.

> they are just a few percent to begin with

To be more precise, losses are exponential with regard to distance. “A typical loss for 800 kV lines is 2.6% over 800 km”[0], which would be close to 60% losses between antipodes[2].

I think that’s still plausibly worth doing, given renewable prices[1], but it’s not great if you can avoid it.

[0] https://en.m.wikipedia.org/wiki/High-voltage_direct_current

[1] that said, I know nothing about the cost of making HV power grids, and it is entirely possible the costs I am ignorant of would make it a bad idea

[2] but not, as autocorrupt first wrote, antipopes — cstross has nothing to do with the global electricity supply, to my knowledge
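Since the quoted 2.6% per 800 km compounds multiplicatively, a short sketch of how loss scales with distance (assuming the per-segment loss rate stays constant, which is roughly true for a fixed voltage and conductor):

```python
LOSS_PER_SEGMENT = 0.026   # quoted loss per 800 km of 800 kV HVDC line
SEGMENT_KM = 800

def delivered_fraction(distance_km):
    """Fraction of input power delivered after distance_km of line."""
    return (1 - LOSS_PER_SEGMENT) ** (distance_km / SEGMENT_KM)

# Antipodal great-circle distance is ~20,000 km; a practical routed line is
# longer, which is where figures closer to 60% loss come from.
print(f"{1 - delivered_fraction(20000):.0%}")  # ~48% lost over 20,000 km
```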

That's an interesting hypothesis. So are there layers in the Earth's mantle that are superconductors?

For the metric people, from the actual paper abstract:

> Here we report superconductivity in a photochemically transformed carbonaceous sulfur hydride system, starting from elemental precursors, with a maximum superconducting transition temperature of 287.7 ± 1.2 kelvin (about 15 degrees Celsius) achieved at 267 ± 10 gigapascals.

It absolutely boggles my mind that a science web site, reporting on a science breakthrough would report the result in such antiquated units. WTF, people?

Or at least they could have added figures in both units. Makes it easier for everyone

I think it’s because the writers have this thing called “people skills”.

Or they are just americans that forgot about the rest of the world or don't even care

Quanta's tweet for this included the rather nice pun "But there is a crushing caveat" which gave me a chuckle after reading the article.


So far, there's always been a caveat. Kudos to Quanta for being up front with it in the headline and being punny with it too.

Achieved at 267 GPa, 15 deg C.

Paper: https://www.nature.com/articles/s41586-020-2801-z

"In a diamond anvil" is the "in mice" of superconductivity.

Has diamond anvil superconductivity been achieved with other conductors?

Yes, many times, but at significantly lower temperatures.


Maybe this is a crazy idea. If we put superconducting cables around Mars at say +/-50 degrees latitude, can we create a planetary magnetic field to prevent atmospheric removal from the solar wind? Would the atmosphere start to thicken?

I posed this to an EM friend and he estimated 1,000,000 Amp-turns would be required. Never checked his math but that current seems plausible with a good superconductor, plus it's cold on Mars!
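For scale only, the field at the center of a circular loop is B = μ₀NI/2R. A rough sketch below simplifies the ±50° latitude cables to a single planet-sized loop (my assumption, not the friend's setup) and says nothing about whether the result suffices for solar-wind deflection, which depends on the magnetopause standoff distance rather than on matching Earth's surface field:

```python
import math

MU0 = 4 * math.pi * 1e-7    # vacuum permeability, T*m/A
R_MARS = 3.39e6             # mean Mars radius in meters, used as the loop radius
AMP_TURNS = 1e6             # the friend's 1,000,000 amp-turn estimate

# Field at the center of a circular current loop: B = mu0 * N * I / (2 * R)
b_center = MU0 * AMP_TURNS / (2 * R_MARS)
print(f"{b_center * 1e9:.0f} nT")  # ~185 nT; Earth's surface field is ~30,000-50,000 nT
```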

You don't need to recreate a full planet-sized magnetic field for that, you can more feasibly put a much smaller dipole at the L1 Lagrange point that will deflect the solar wind sufficiently so that it avoids Mars.

Will it need to carry an onboard rocket motor to counter the solar wind pressure?

You could put it slightly closer to the sun than L1 and then it could stay in equilibrium.

I hadn't heard of this idea. Very cool!

Mars doesn't need a magnetic field, without it it takes hundreds of millions of years to lose its atmosphere. It's probably much easier to just top it up a bit every few million years.

The magnetic field could help shield Mars from radiation. This recent paper on building an artificial martian magnetic field with a few thousand kilometers of superconducting wire looks fun.


You'd make it cheaper by building air cleaning factories around the globe and get rid of pollution on Earth.

It should be easier to not pollute than to add more industry to clean it up.

Cleaning up is something a relative small number of people can accomplish.

Not polluting is something everyone in the world has to buy in to.

The first option seems much easier to organize.

Not really. We needed to industrialize to get to the point we are now, where we can manufacture clean energy. Coal and oil got humans here through their simplicity (burn it), so that we can now have clean alternatives like solar panels and wind turbines.

So no, cleaning up isn't easier or we wouldn't be fucked right now. By the time we get to Mars as a colony we will, by necessity, have the tech to produce clean energy and will not be able to rely on oil. Thus starting fresh without polluting from the onset - something that was impossible on our own world.

Not in the world/culture/political/economic system we have developed into.

There are only two ways to get things done.

1. Companies develop/supply a "product" because they can profit from it.

2. Companies are forced or subsidised by governments to supply a "product" they wouldn't otherwise be able to do for a profit.

It seems that only option 2 would be applicable here.

Earth is a single point of failure.

Escapism can be a precursor to failure too. I'm not being cheeky. I think that we're not open enough about the fact that we're jumping ship, because we're not sure we can take care of this one. That's important, because it carries serious concerns for how well we'd do on Mars.

I see it not as escapism, but as steps to learn how to take care of a limited resource. Large-scale geoengineering will be necessary sooner or later on Earth. However, it almost certainly has failure modes that we don't know about, and won't know about until we can experiment with it. Testing the effects on Earth, with nearly 8 billion people, is wildly reckless. Testing the effects on Mars or Venus, though costlier to implement, has the advantage of not risking those 8 billion lives.

That's an absolutely fair point.

One thing to watch out for though, is that it's only half of the rationale behind escapism: we're concerned about our own stewardship of the Earth, but there's a very real concern that a disaster could happen to it that's not of our own making. An asteroid, for example.

Mars would protect us from several categories of these, and becoming multi-stellar would protect us from several more.

There's no jumping ship. Every other planet / moon in our solar system is far worse than the worst projections for the Earth for thousands of years.

That's what I've always thought as well. If there's a problem with our culture that we keep passing through generations, we could buy more time by escaping, but the problem would go with us.

If we don't fix the problem at the core before we colonize other planets, we will become an interplanetary virus, acting as a parasite and killing our host over time.

Hedging is not a bad idea. We clearly have the resources for it. And to make matters worse, every day we are discovering something new about the instability we are wreaking upon this planet. Why on earth would you argue against hedging when we are so badly ignorant of the situation that some minor variation might be sufficient to cause a planetary collapse and wipe us all out?

I agree that hedging is a good idea. It's just that earth-sustainability requires resources as well.

Fair. But I think there is a lot we can learn about sustainability and what humans really need by putting them on an empty planet with zero natural resources except the minerals in the ground and some frozen water.

Mars will build up to sustainability, while on earth we try to cut back to sustainability.

I think the idea of "jumping ships" is silly, this is technically impossible in the near future. But urgency of expanding existed long before any recent events. To survive we need to spread.

To survive we need to take care of ourselves and our environments. Reducing dependence on earth is a small aspect of that, in my mind.

Or, we could live in harmony with what is.

You say this as if it's easy to add another biosphere - like it's some project resource.

It's the only place we can inhabit, so we should focus on protecting it.

Building an ecosystem of O'Neill cylinders is probably more viable means of space colonization than colonizing mars.

This is a major life-altering question. If we can't learn to all live together peacefully and help each other solve problems here on earth, what makes anyone think that we will survive in space & beyond...

I love everything about space exploration, but I'm not naive enough to believe its the solution to our problems here on earth. One might argue its a distraction from our ongoing global humanitarian crisis.

People everyday are dying from lack of food, water, shelter, etc...

The time & money spent on solving the galaxy's mysteries could be brain-power-backed capital used to solve our dire terrestrial affairs. IMHO...

Food for thought...

It's a bit of a false dichotomy I think. Injustice causes our global humanitarian crises and while rocket scientists are very smart they're probably not the best people to solve corruption and injustice.

There are 7 billion people on earth. That gives us a bit of leeway to multitask. We can have activists and rocket scientists solving different problems.

> What time & money is spent on solving the galaxies mysteries, could be brain power backed capital used to solve our dire terrestrial affairs

This is often repeated but makes no sense at all. The time and resources humanity as a group spends on those activities corresponds to 0.1% of our output. Infinitely more is wasted on mundane stuff like manufacturing cars, golf carts, office jobs or reading online forums.

The world and our lives are very interconnected. You can't just focus on a single task nor would that actually lead to any better progress.

I suggest you read this: https://lettersofnote.com/2012/08/06/why-explore-space/

> "In 1970, a Zambia-based nun named Sister Mary Jucunda wrote to Dr. Ernst Stuhlinger, then-associate director of science at NASA’s Marshall Space Flight Center ... Specifically, she asked how he could suggest spending billions of dollars on such a project at a time when so many children were starving on Earth. Stuhlinger soon sent the following letter of explanation ... later published by NASA, and titled, “Why Explore Space?”"

I think you started great, but didn't follow through on your own thought.

"learn to help each other solve problems"

The most important problem we need to solve, is how to survive in the universe, where any large rock falling from the sky can wipe out our civilization, if not the whole mammalian branch.

We don't have to abandon efforts to improve human life on this planet while trying to expand to more than one.

The root causes of many (but not all) of our major problems are political or social in nature and can't be solved by throwing money or engineers at them. Also, there are many people on Earth, "we" can work on multiple problems simultaneously.

> Earth is a single point of failure

Only in the sense that a climbing harness is also a single point of failure.

It very much is. So is your rope, carabiner and belay device.

So is the universe.

Let's terraform Terra.

More urgently: Could we use superconducting magnets to keep the magnetic field of earth stable so it doesn't collapse and flip in the coming years?

Are you asking if we can bump the existing magnetic field dynamos into stability? It's an interesting idea but given the size and power of earth's natural field, I'm pretty sure that the math would work out to more energy than all of earth's resources could provide or something like that. You'd be literally manipulating the core of the earth.

Right. The next emergency is clearly earth's magnetic field.

Does anyone have a key understanding of why these extreme pressures enable superconductivity?

I'm trying to imagine how these extreme pressures would modify bond angles, nuclei spacing, and constraints on motion. And also trying to understand how that's affecting the behavior and creation of the Cooper pairs.

Also a handwavy explanation, aimed at people who aren't familiar with a lot of the concepts of condensed matter physics. Please take it with the caveat that current theory can't fully explain how high-temperature superconductors work, and that I'm not an expert in the field.

First concept: virtual particles vs real particles. When we talk about "an electron flowing through metal," it is not actually a single electron. As it moves, the electron will merge into an atom and another gets knocked out. But in aggregate it "acts like" a single particle, with possibly different properties from a real electron. For example, it likely has a different mass. A virtual photon will travel slower than a real one. And so on.

Virtual particles can even correspond to things that aren't particles at all! For example sound is a wave, and quantum mechanically is carried by virtual particles known as phonons. These act exactly like any other particle, even though they are actually aggregate behavior of lots of other things!

A Cooper pair is a pair of things (eg electrons) that are interacting enough that they have a lower energy together than they would apart. Electrons are fermions, with half spin. They have a variety of properties, such as the Fermi exclusion principle. A bound pair of electrons becomes a virtual particle with an integer spin. Which makes it a boson, which behaves differently.

Superconductivity happens when charge is carried by bosons.

In high temperature superconductors, it looks like the electrons are at least partially bound by interaction with phonons. The high pressures change the speed of sound, and therefore change how easily Cooper pairs form.

Everything that I said above was based on what was known a couple of years ago.

However https://phys.org/news/2019-04-mechanism-high-temperature-sup... claims that there is now a theoretical explanation for high temperature superconductors, and the best guess above doesn't seem to be the real explanation. The real explanation being that the feature/TIQ-7651_unique_schema_version

Remember what I said about particles having a different mass moving through materials? The binding together of electrons through interaction with phonons seems to depend on the mass of the electrons. When you squeeze the lattice, that mass decreases.

You seem to have a copy-paste error.

> Fermi exclusion principle

This is the Pauli exclusion principle, in case someone wants to learn more on the subject.

>In high temperature superconductors, it looks like the electrons are at least partially bound by interaction with phonons. The high pressures change the speed of sound, and therefore change how easily Cooper pairs form.

Interesting. Do we know if it possible to disrupt superconductivity with sound at just the right frequency? And the converse, has anyone tried to enhance superconductivity by using sound (i.e. increase either the critical temperature, increase the current density, etc)?

HTS will stop superconducting once a certain amount of energy is added. This energy can be in the form of heat, magnetic field, electric current, or mechanical strain. If you keep the HTS colder you can accommodate more of the other forms of energy. I do not know if sound would disrupt superconductivity but since sound is a form of energy it is very likely.


Like another poster already said, both lower temperatures and higher pressures confine the movements of the atoms, so either of them can cause the same phase transitions.

Besides this new example with superconductivity, there are other more familiar phase transitions with the same behavior.

For example, with most liquids, in order to solidify them you may either cool them or compress them.

The same if you want to liquefy gases, either cooling or compressing has the same effect.

Room-temperature superconductivity at very high pressures has been predicted many years ago, but it is very nice to have an experimental confirmation.

Handwavy explanation: the particles pair up because of vibrations in the crystal. It's modeled like a bunch of metal balls with springs between them, and you can imagine tapping one end and sending a wave of vibrations through. However, these springs are a bit non-linear, so I imagine that if you pack the atoms closer together then you will change the spring constant.

The other knob you can use to change the vibrations is the mass of the balls. This can be done by using different isotopes of the same element and the critical temperature goes down with mass.

The particles can't pair up, because equal charges repel. That's still the virtual model.

I don't quite remember my intro to electrical components, though it's a quick read for the basics. The GP obviously knows about atom models and band gap.

The paradoxical bit is that, as far as I can tell, pressure is roughly equivalent to heat, and heat equals decreased intrinsic conductivity. But if I imagine that high pressure restricts the absolute motion of particles, that would equal decreased resistance (like an idealized fixed suspension for your swing, one that doesn't take energy out of the system).

Since hydrogen is involved, I suppose there's a channel of hydrogen nuclei stripped of their electrons, and the high pressure is needed to keep the hydrogen from moving apart and recombining outside the ensemble. Surely this involves some form of entanglement? Which I imagine as a kind of clockwork, all cores spinning in unison.

Haha, I have no idea what I'm talking about.

Type I superconductors (the ones people understand) happen because electrons pair up.

The equal charges do participate in the problem, but they do not stop the electrons from pairing up. There is a lot of virtual-particle exchange between them, but that's how forces happen. It's more correct to say that the crystal mechanically constrains the electrons into pairs than that the electrons pair via virtual particles.

(IANAP, but this one topic I have studied a little.)

I have a different explanation. Think of a material as a sponge for heat. When I squeeze the material, I raise the temperature, and that causes heat to leak out. The temperature of the material doesn't really tell me how much heat is in it, so this experiment is suggesting that it is the heat itself that prevents superconductivity.

Now a superconductor is just a conduit for electrons that doesn't generate heat. We know from Landauer's principle that heat is only generated when you destroy information. If I take a pair of entangled electrons, those electrons contain exactly one bit of information (in the von neumann sense). If I cannot add energy in excess of the energy required to disentangle them, then that bit of information is never destroyed.

Whether or not a given interaction between the electron pair and the substrate has enough energy to disentangle them is not a function of temperature, it is a function of the actual energy that may be imparted to my pair. Which is proportional to the actual heat in my material, rather than its temperature.

Overflow in the physics simulation code.

I wonder if this has implications for fusion. In fusion, you have tremendous pressure outwards on the containment vessel due to the magnetic field "squishing" the plasma to a density that's enough to promote fusion and redirect scattering forces back inward.

Of course to create that magnetic field, you have to have superconductors very close to this superheated plasma. So the first thing to relate to this is there may be less cooling required.

The second thing is, and this is both a stretch AND possibly a huge gain: perhaps the required pressure for the superconduction can be provided by the inherent pressure of the fusion reactor core.

That's an interesting idea. I wonder if that couldn't be used to bootstrap a room-temperature superconductor: cool it down, start a magnetic field, and let the magnetic field reaction compress the material so that you can let it heat back to room temperature. Could an external or room-temperature field also possibly be enough?

Yeah you'd still need a bootstrap to get the reaction going, and therefore generate the pressure. The question is, does this material have two superconduction modes: one at low temperatures and pressures, and another at high temperatures and pressure?

The idea would be to create a superconductor with a pressure/temp curve that is amenable to the pressure/temp curve of the starting sequence of a fusion reactor.

And why can't I find any research on using the magnetocaloric effect to achieve superconductivity? It's an obvious idea to try.

Writing about scientific papers and using Fahrenheit should be a punishable offense.

Yep they actually converted the original units from the paper to Neanderthal ones.

It is really weird they wrote about 'atmospheres' too...

how are europoors so consistently butthurt about fucking units of temperature

My favorite quote:

> Over the course of their research, the team busted many dozens of $3,000 diamond pairs. “That’s the biggest problem with our research, the diamond budget,”

I like this, too:

> “It’s clearly a landmark,” said Chris Pickard, a materials scientist at the University of Cambridge. “That’s a chilly room, maybe a British Victorian cottage,” he said of the 59-degree temperature.

I worked one summer at a laboratory called the Geballe Laboratory for Advanced Materials (GLAM).

Maybe that's why the acronym is what it is. I wonder what GLAM's diamond budget is.

Sounds like prices for gem cut diamonds, which brings in the whole DeBeers monopoly pricing. I wonder why manufactured or rough cut diamonds couldn't be used.

Having worked in materials science research, the problem is that you generally need a very bespoke specific thing crafted for you by a professional lab supplier, and that is expensive.

It is a diamond press, the diamonds have to be cut to shape and they also have to be transparent. Industrial diamonds aren't transparent enough.

Modern lab-grown diamonds are higher quality than even the finest gem-quality natural diamonds.

From what I understand, the issue with lab-grown diamonds is that they can't really grow them beyond a certain size at this time. I think clear ones are a couple of carats, and colored ones are roughly double that. I could be off a bit. Regardless, that's not huge, though I don't know what size they require. Maybe it's sufficient.

Why do the diamonds need to be transparent?

People interested in high-pressure chemistry follow what's happening with spectroscopy, UV/Vis or infrared. Fig. 3 on p. 376 has the Raman spectrum.

So they can probe/measure the contents with light, usually in laser form.

Contents are completely enclosed by diamond to hold the pressure. No way to probe otherwise as far as i know.

X-rays are also widely used to probe high pressure matter within diamond anvil cells

Here is an example at an ESRF beamline https://www.esrf.eu/home/UsersAndScience/Experiments/MEx/ID1...

Don't forget neutrons! Not as quick to measure, and not so good with very small samples, but well-suited to combinations of extreme environments beyond just pressure, such as temperature, magnetic field, voltage gradient etc. https://www.isis.stfc.ac.uk/Pages/Pearl.aspx

Amusing to see for an x-ray crystallographer that the neutron scattering coefficient for tungsten carbide is actually lower than for pure carbon. Neutrons are weird.

My hypothesis is that they have imperfections in them that lead them to be structurally weaker than ones crafted by the natural pressures of the Earth.

This is actually not true! This was true maybe 15-20+ years ago, but since then lab diamonds have gotten very pure. So pure that now Big Diamond markets their flaws as "natural characteristics" that make their diamonds unique.

Pretty funny if you ask me.

That being said, lab diamonds are not necessarily that cheap, depending on the dimensions and qualities necessary.

Yeah but they still have defects. There is no such thing as a defect-free material; it's thermodynamically unstable. Under stress, the defects (voids, dislocations etc.) lead to crack propagation and the diamond is kaput.

Distinguish "perfect" from "better than natural diamonds".

Lab-grown are better than natural in the sense that they have a lower density of defects. But they still have defects which means they will break under sufficient stress. It doesn't matter if the diamonds they source is lab-grown, it will still break under sufficient stress.

That's the thing, lab grown diamonds have fewer defects than natural ones. They're too perfect, according to the diamond/jewellery industry.

I'm disputing the notion that in order to prevent the breakage of the diamond anvil cells that the researchers used, they should source lab-grown diamonds. I'm saying that those will too break because they contain defects. Lab-grown is better, but still not defect-free.

Saw the title and thought "oh, I'll bet it is at some insane high pressure or some other exotic condition". Clicked through to an image of a diamond anvil. Not disappointed.

I'm actually really happy they put the catch at the top of the article.

It's so annoying to read science articles about how X will revolutionize Y, but you have to dig through the comments section to find out why it won't work.

Most new research findings only have very specific applications. It's only groundbreaking when something can (eventually) be implemented in real life for a reasonable cost.

Quanta does some of the best science reporting. They have a knack for making highly complex and technical concepts accessible to the general public without sacrificing accuracy.

They don't get points for putting it at the top of the article - all they're doing is correcting their own misleading title.

And yes, I say misleading. Technically true but misleading, because their omission is absolutely critical to the nature of their breakthrough and as you implied, anyone who knows the first thing about room temperature superconductors will want to know if the material has a drawback stopping it from functioning outside of a strictly lab setting.

I wonder whether it might be possible to create a material using this and carbon nanotubes worked through it. With the idea that the nanotubes can create and hold the pressure for the superconductor to operate.

This was achieved in a diamond anvil cell, a sample smaller than a millimeter is squeezed between two diamonds in a special apparatus. This is how you achieve world record high pressures, not even remotely in the realm of possibility for engineering a material.

The bulk modulus of superhard phase nanotubes is 462 to 546 GPa, even higher than that of diamond.

Engineering a wire under pressure whose whole length is compressed by a structure made out of carbon nanotubes is clearly difficult, but seems theoretically possible. It is very likely beyond our current engineering capabilities. But in principle it is a technology that we could try to develop.

For example: at low temperature you assemble a wire that has a high thermal coefficient of expansion down the center. Then the superconductor around that in a ring. Then a carbon nanotube sheath around that which traps things. Then when it warms up, the core squeezes the superconductor against the sheath and you get the pressure.
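A back-of-envelope for that scheme, assuming a perfectly rigid sheath so all the thermal expansion converts to pressure (the most optimistic case; all numbers are illustrative guesses, not measured values):

```python
# Pressure from warming a high-expansion core inside a perfectly rigid
# sheath: the blocked volume change times the core's bulk modulus.
alpha_linear = 2e-5   # 1/K, linear expansion coefficient (aluminum-like)
delta_T = 280         # K, warming from ~20 K to room temperature
K_core = 100e9        # Pa, bulk modulus of the core material

volumetric_strain = 3 * alpha_linear * delta_T   # ~1.7% blocked volume change
pressure = K_core * volumetric_strain            # Pa, if the sheath never yields

print(f"Pressure: {pressure / 1e9:.1f} GPa")     # → 1.7 GPa, vs ~267 GPa required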

That sounds like a very large version of https://en.wikipedia.org/wiki/Prince_Rupert's_drop.

When you hit one end of the wire with a hammer, or drop something on it, the entire wire might explode.

Yeah. When something is under that much pressure, the failure modes tend to be..interesting.

Maybe if you’re writing science fiction or have a time machine, not if you’re an engineer. For starters the bulk modulus is about deformation, not strength. Second issue is that you’d be creating explosive cable. Go watch some youtube videos of tempered glass exploding and then imagine what a material under 1000 times the pressure would look like in failure.

The whole excitement about room temperature superconductors is getting rid of the difficulty of cooling, the difficulty of this pressure requirement is easily much worse.

First you demonstrate that it is possible.

Then you make it feasible.

And then you make it practical.

One step at a time.

Eh, similar (not quite as impressive) but high-temperature superconductivity had been observed with sulfur+pressure before, it's unclear how you would make that practical.

Hopefully as we pile on more and more examples of different materials exhibiting superconductivity, we can understand it well enough to find a practical high temperature one.

Though I think we could go a long way with the liquid nitrogen temp superconductors now on the market. It's still going to be a real chore to design around, but it's got to be a lot easier to deal with liquid N2 than liquid He.

That sounds like writing code

It's not far off. Engineers see if it's doable for a given budget, physicists show and analyse that it's possible at all.

Similar to a CS paper showing a new algorithm, e.g. one that sorts with x% fewer swaps than quicksort: it might not actually lead to a performance increase on real hardware.

You find some sort of physics that do most of the work for you.

Can someone much smarter than me clarify if this is supportive of, or related to, the US Navy patent from several years ago for Piezoelectricity-induced Room Temperature Superconductor? [0][1]

[0] https://patents.google.com/patent/US20190058105A1/en [1] https://eandt.theiet.org/content/articles/2019/02/us-navy-pa...

This finding is real science. The patent you cite is unrelated and 100% bullshit.

For example, it states preposterous sentences such as this:

"The fact that the fine structure constant can be expressed as a function of (2e) shows how important the notion of electron pairing is in the composition of the Universe, and gives credence to the theory that the fundamental cosmic meta-structure may be thought of as a charged superfluid, in other words, a superconducting condensate."

This guy is a scammer who was able to bamboozle his patent attorneys, presumably he gets some incentive to publish patents?

Is that preposterous? The idea that "spacetime"(?) may have superconductive properties doesn't seem facially outlandish, but again I'm not a fancy scientist. He is an awarded US Navy scientist though, and these patents were specially requested by Navy brass, so I'm not inclined to think he's a scammer that slipped one by his attorneys.

I have met someone who got a stupid patent — IIRC it was a two bit binary adder — because the patent lawyer messed up what he sent to them.

He didn’t check before it was filed because he didn’t care (the point of the patent was “we needed a patent-protected system to be granted a license to a codec”), and it was granted anyway.

If you look up the history of this patent, the first few times it was submitted it was denied; then some US general commented and basically rubber-stamped it through.

From what I can see using a very basic understanding of superconductivity, in that patent superconductivity is achieved by:

1) Taking a wire and mechanically inducing a wave of lattice vibrations

2) Firing a pulse of electricity down the wire to "ride the wave" of superconductivity produced

whereas normal superconductors (including this one) produce superconductivity because the first negatively charged electrons in a wave of current pull the positively charged lattice in behind them as they pass, creating a distortion that attracts the second wave of electrons traveling behind (which in turn does the same to the third).

Interesting. I've been putting 0 ohm resistors in my schematics for years. I wonder what took these guys so long. :)

no you didn't. you put in 1 nano-ohm resistors in the best-case scenario, not zero.

No. It's actually a zero (1) ohm resistor. You can buy one too. https://www.digikey.com/en/products/detail/koa-speer-electro...

(1) Please read the terms of the data sheet carefully and consult your local engineer before using. Certain restrictions may apply. 0 ohm not available in all jurisdictions.

There is no such thing as a purely zero-ohm resistor in this Universe. A resistor made from a single atom would still require energy to move the electron. That's how physics works in this Universe. Hence no, you'll never have a zero-ohm resistor.

1. Smiley face

ii. In schematics != in reality

c. Lots of things are within a useful margin of error of $impossible_standard

δ. This entire post is about superconductors

> There is no such thing as a pure zero resistor in this Universe.

Looks like unnouinceput is just in the wrong jurisdiction. :)

I was like...okay but how I can get it to dissipate 0.25W? :D

From the product description:

> Features: Anti-sulfur, ...

Imagine the shipping warning labels.

I believe his smiley face is one to one exchangeable for sarcasm tags.

267 GPa isn't unreasonable to achieve 'in the home'.

For example, if you had a rod of the superconducting material 1mm in diameter, and you wrapped it tightly with a strand of Kevlar (tensile strength 3.6GPa) until it became 100mm diameter, then the center would have a suitable pressure...

Tensile strength is pulling. The diamond anvil needs compressive strength.

A cigarette lighter has compressed gas inside... Yet the walls need tensile strength...

This is true, however in this case the tensile strength of the casing needs to be focused into a small area and that is done by translating the pressure from a large area into a small one.

This doesn't seem right, how does a rod go from 1mm in diameter to 100mm in diameter under pressure? Also, how does one "wrap tightly"? The material would fracture under the stress of the first wrap.

You're right - I assume that Kevlar's spring constant is << the rod's.

If that's the case, you just set the tension in the thread so the Kevlar is near its breaking point, and start winding, like winding thread onto a bobbin...

Not being a mechanical engineer, I don't have an intuitive feel for how the additive nature of the pressure develops as you describe. I would have thought that the outer layers would start to compress the inner layers, relaxing some of their stress. Is there a mech-e 101 type of link you could pass along to help get me up to speed? Or maybe you are saying that 3.6 GPa times the ratio of 100 mm to 1 mm gets us to the 360 GPa mark? So you don't need layers of Kevlar as your "anvil"; you could use something else more rigid and just wrap one layer of Kevlar around it to develop the needed pressure. Thanks.
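Not a mech-e either, but the standard back-of-envelope for constant-tension winding is that each layer at radius r under hoop stress σ adds dp = σ·dr/r of inward pressure, which integrates to p = σ·ln(R_outer/R_inner) — logarithmic in the ratio, not linear. A sketch with the thread's numbers (and the optimistic assumption that full winding tension survives, with no relaxation of inner layers):

```python
import math

# Constant-tension winding: each hoop layer at radius r under hoop
# stress sigma contributes dp = sigma * dr / r of inward pressure,
# integrating to p = sigma * ln(r_outer / r_inner).
sigma = 3.6e9        # Pa, Kevlar tensile strength
r_inner = 0.5e-3     # m, 1 mm diameter rod
r_outer = 50e-3      # m, 100 mm diameter wrap

pressure = sigma * math.log(r_outer / r_inner)
print(f"Core pressure: {pressure / 1e9:.1f} GPa")  # → 16.6 GPa, vs 267 GPa required
```

So even ignoring the relaxation you describe, the logarithm caps this an order of magnitude short of what's needed.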

Dumb question:

Shouldn't "room temperature" include in its definition about one atmosphere of pressure?

I can tell you that I wouldn't want to experience "room temperature" 30,000 km above sea level.

Room temperature plus one atm pressure is a separate term, STP - "Standard temperature and pressure".

(Of course it's hilariously not "S" at all, because no one can agree whether the "T" is 0 deg C or 25 deg C or somewhere in between. But when we're talking about room-temperature superconductors that's not really a big deal.)

Okay, should we be talking about "STP semiconductors" then? I doubt anyone envisioning 'room temperature superconductors' was envisioning a diamond anvil.

There's the concept of "standard temperature and pressure" (STP), which IIRC is 0 Celsius at sea-level pressure.

Room temperature means room temperature. Room temperature and 1 atmosphere of pressure means room temperature and 1 atmosphere of pressure.

> As the temperature of a superconductor rises, however, particles jiggle around randomly, breaking up the electrons’ delicate dance.

hasn't this pretty much always been the crux - to fixate the particles?

while it's a nice achievement experimentally-speaking, is it really that surprising that materials immobilized by enormous pressures but at elevated temperatures exhibit the same behavior as if they were immobilized via chilling to near 0K? either condition is impractical to attain outside of a specialized lab.

Note: 59 degrees Fahrenheit is 15 degrees Celsius

I'm wondering, doesn't the concept of temperature change with pressure? For example, homemade fusion reactors operate at low pressures but at a temperature higher than the sun's. So if this circuit is operating at a high pressure, then isn't the surrounding room temperature relatively low? At the mechanical level, a substance held between diamond anvils isn't free to change momentum or kinetic energy due to the kinetic impacts of room-temperature air, vapor, or plasma.

Second question, do virtual particles have the same Casimir effects in this apparatus, as we would see in low pressure experiments? If you're interested, also checkout the results published recently on measuring the Casimir force. Reference: “Casimir spring and dilution in macroscopic cavity optomechanics” by J. M. Pate, M. Goryachev, R. Y. Chiao, J. E. Sharping and M. E. Tobar, 3 August 2020, Nature Physics. DOI: 10.1038/s41567-020-0975-9

There was a previous article posted on HN about a new record being set for the speed of sound. The medium that transmitted the sound was high-pressure hydrogen. I don't know what rabbit hole I stepped into, but it led me to an article about solid hydrogen acting as a metal and becoming an awesome superconductor at room temperature.

I thought we had reached this record with solid hydrogen. But I cannot find this online anywhere. The material this article goes over is hydrogen-carbon-sulfide. The previous record for superconductivity was using hydrogen-sulfide.

I wonder what other materials can be added to lower pressure at room temperature and maintain superconductivity. Lithium? Nickel? Copper?

Metallic hydrogen is predicted to be a room-temperature superconductor. And if my understanding is correct, it is related to the speed of sound ("phonons") in that material. Stressing the material further increases the speed of sound.

However, metallic hydrogen is pretty much theoretical at this point, although it looks like we are getting closer. No other material that I know of had achieved >0°C superconductivity before.


WTF using Fahrenheit in a scientific article??

I'm completely convinced that future superconductors and stronger magnets will lead to desktop sized fusion generators.

It's so exciting because there's no obvious limitation to why it wouldn't work. If superconductors improve at current rates we'll just end up there in 30-40 years naturally.

To think in another generation humans might have cheap limitless power, it's tantalizing

Room-temperature but earth core pressure, lol. But still very exciting indeed!

> This page doesn't exist

> At least not in this universe.

Good one. Did they take it down?

Edit: looks to be back now.

Still down for me.

same for me
