1) While I agree with the article's assessment that superconduction along huge distances is a likely no-go given the pressures involved, it's not out of the realm of possibility that we could find a way to apply massive static pressure loads to small high-performance circuits.
2) If the pressures are 75% the Earth's core, that raises interesting geologic questions about what's going on in the Earth's core. Perhaps the model for Earth's magnetic field (or the material that causes it) will need to be adjusted to account for the possibility of naturally-occurring superconductors.
Pedantic note: they said only for this material, not a no-go in general. Not sure if you caught that.
"Strained silicon" is a technique for achieving faster frequencies in modern CPUs by applying force to the lattice with another deposition layer. I wonder if the properties of the lattice under pressure could be induced either by a similar process, or even during a phase change when creating the material.
I also know nothing about material science so it's baseless speculation
I commented on the quantamagazine article but it's awaiting moderator approval
Certainly it's years away, but I dunno... this feels possible in an engineering sense.
It's all fine and dandy until it breaks open along the length and releases not just the stored mechanical energy, but also all the inductive energy from the flowing current of a few million amperes.
A superconductor blowing up across its length and distributing the energy doesn't seem as bad as a whole transmission line's energy being released at a point fault.
My instincts tell me it would actually be much safer. Overheating wire would instantly lose superconductivity and vaporize into an open circuit. The same thing a fuse does.
Current wires have enough bulk to heat up and melt slowly. This is more opportunity to burn things and set stuff on fire. Vaporized plasma is low density which probably won't concentrate energy enough to light surrounding materials on fire. Plasma also disassociates quickly in air, again way less time for fires.
For mechanical stress, it may be sufficient to just ensure the stress is evenly distributed in all directions. This way the energy will be driven into shockwave and heat rather than motion. The wires might make a big bang, but this isn't very harmful underground or high in the air on lines. Explosions are way less dangerous in open air than when they are confined.
Have the supercurrent in one direction matched by the same supercurrent in the other direction, alternations as closely spaced as possible.
The magnetic field will be confined, at least.
When the fiber breaks, join the entwined superconductors together (equivalently: have the insulator break down), and the current in the rest of the transmission line can continue flowing.
Coaxial variant also available.
That said, you can locally get huge stresses without breaking the material.
We've found a few substances that are superconductive at very low temperatures, but it doesn't have any impact on geology because those conditions simply aren't found on earth.
In other words: We have a few vague assumptions of what happens down there, based on our current understanding of physics, which we base on what we could veri-/falsify by experimenting with things which are accessible or at least visible to us.
Nobody has been down there; we have no pristine samples, nor the ability to get them (for now).
It's fantasy given what we understand about the earth and about superconductors. That's not impossible, but like any highly improbable claim it requires extraordinary evidence to take seriously. Otherwise it's like claiming there's a teapot between the earth and the sun, or some kind of invisible deity. We've left the realm of science and the natural world and entered the world of fantasy.
We can't really predict when a volcano will erupt or with what force. Same for the when, where, and why of earthquakes. That's much more accessible and of concern to us, yet we can't. Because our understanding is vague! Got it?
You may be interested to know that the same logic applied to concentrations of U-235 needed to run a fission reactor turns out to be false:
In modern industry, getting enough U-235 to run a reactor requires separating and enriching the ore, but this happened by chance in the distant past.
U-235 was about 3.1% of the uranium in ores 1.7 billion years ago, and is what can be used in some reactors today. Uranium ore today has only about 0.72%, so an Oklo-type natural reactor could not form on today's Earth. See  for a good description; the atlasobscura article is poor and factually incorrect in places.
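For anyone who wants to check the decay math, a quick back-of-the-envelope sketch (standard published half-lives; today's abundances are the only other inputs):

```python
# What fraction of natural uranium was U-235 when Oklo ran
# ~1.7 billion years ago? Run both isotopes' decay backwards.
HALF_LIFE_U235 = 703.8e6  # years
HALF_LIFE_U238 = 4.468e9  # years

def u235_fraction(years_ago):
    """U-235 atom fraction of total uranium, `years_ago` years in the past."""
    n235 = 0.72 * 2 ** (years_ago / HALF_LIFE_U235)
    n238 = 99.27 * 2 ** (years_ago / HALF_LIFE_U238)
    return n235 / (n235 + n238)

print(f"{u235_fraction(1.7e9):.1%}")  # ~2.9%
```

It lands around 2.9%, in the ballpark of the ~3.1% figure; the exact value depends on the age you assume for the reactor.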
But as far as I know the existence of naturally occurring superconductors is a completely independent probability, so it doesn't really make sense to use one to justify the other. Are there naturally occurring superconductors somewhere in the universe? I mean, without doing any calculations I'm tempted to say almost certainly yes. Do they exist somewhere in the Earth? As far as we know, extremely unlikely.
It's extremely difficult to produce useful proteins from random DNA chains. As in, if I took all of the atoms in our galaxy, paired each with random DNA, and allowed you to pick just one (blindfolded), only one of those DNA strands would contain the DNA necessary to produce a valid/useful protein. Literally every other atom has garbage DNA/proteins. The human body contains between 80,000 and 400,000 proteins.
DNA looks far more like "information" than it does like random bits written to disk. It's analogous to trying to find an x86 program of at least 160 instructions that computes a valid mathematical function by randomly splatting 1s and 0s to disk and then "running" the "code". Eh, maybe it'll eventually happen, but you can see how hard it actually is in practice. Heat-death-of-the-Universe hard.
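Just to put numbers on that scale argument (the ~1e68 atoms-in-the-galaxy figure is a commonly quoted rough estimate, not something from the article):

```python
# Size of the search space for a 160-residue chain with 20 choices
# per position, versus a rough estimate of atoms in the Milky Way.
from math import log10

CHAIN_LENGTH = 160  # amino acids, per the comment above
ALPHABET = 20       # possible amino acids per position

print(f"possible chains: ~10^{CHAIN_LENGTH * log10(ALPHABET):.0f}")  # ~10^208
print("atoms in the Milky Way: ~10^68 (rough literature estimate)")
# A uniform random draw from a space of ~10^208 is hopeless, which is
# the point here; the replies below argue nothing draws uniformly.
```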
Given a long enough timeframe, even unlikely things will happen. However, the Universe isn't very old—which is why it's a major current issue.
Creating random DNA sequences and showing they don't produce 'useful' proteins has nothing to do with any of those questions (and how do we even know they are not useful?)
I think we all agree about that.
> Scientists who have done the math disagree. It's a major current issue.
The problem is that you can do the correct calculation or a wrong calculation. For example, the number 160 is probably too high. Some current proteins have 160 or more amino acids, but there are shorter proteins, and there are some useful short amino acid chains with only 20 amino acids.
There's between 100 and 200 billion galaxies in the observable universe. There were billions of years to do the choosing - how many times per minute am I allowed to do it?
> The human body contains between 80,000 and 400,000 proteins.
Good thing first life wasn't homo sapiens then (and probably wasn't using DNA).
There is no reason to posit random DNA chains.
The statement that "the number of DNA chains that produce valid/useful protein in the space of all possible DNA chains is vanishingly small" seems reasonable (however I'm not sure how we would know these chains are the only ones that produce valid/useful proteins).
The idea that we need to choose randomly from the space of all possible DNA chains is not reasonable.
Once we have a reproducing molecule, we expect to see a multitude of valid reproducing molecules as descendants of that first molecule. We expect (at least some of) these descendants to eventually be extremely different from the original molecule, and by their nature valid reproducing molecules.
Once we have a reproducing molecule (like DNA) that creates other molecules (like RNA and proteins) we can expect the same of its descendants, and the descendants' by-products.
If these molecules form an ecosystem, where the reproduction of one relies on the validity of the other, the only successful variations within the ecosystem will be valid variations of the ecosystem.
The space that we are choosing from is not the space of all possible DNA chains, it is the space of all DNA chains adjacent to existing valid chains (or chains in a valid ecosystem).
It's analogous to taking a valid x86 program that can reproduce, randomly adding/removing/mutating some bits on reproduction (with low frequency, very quickly, and in a ginormous space - think on the scale of molecules in the Earth's oceans), and asking if that new program is also valid. And then, after millions of years of this, asking if one of the programs is a valid mathematical function.
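Here's a toy sketch of that difference (deliberately not biology: an arbitrary bitstring target and an arbitrary mutation rate, just to show local search vs. uniform draws):

```python
# Mutating copies of an existing sequence, keeping the better copy,
# reaches a target that uniform random sampling essentially never hits.
import random

TARGET = [1] * 64  # a stand-in for one arbitrary "valid" sequence

def fitness(seq):
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rate=0.02):
    # copy with occasional random bit-flips, like imperfect reproduction
    return [bit ^ (random.random() < rate) for bit in seq]

current = [random.randint(0, 1) for _ in TARGET]  # random starting point
steps = 0
while fitness(current) < len(TARGET):
    child = mutate(current)
    if fitness(child) >= fitness(current):  # selection keeps the better copy
        current = child
    steps += 1

print(f"selection reached the target in {steps} steps")  # typically a few thousand
# A uniform random search would need ~2^64 (~1.8e19) draws on average.
```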
There are still big questions here. Questions like "how do we get the first reproducing molecule?" and "is DNA likely to arise once you have reproducing molecules, or just one out of many options?"
None of those questions give reason to invoke the number of all possible variations of DNA as evidence that the variation we see in proteins is somehow unlikely.
Once we know that there exists one valid DNA/protein system (which we do, as it exists), and we know that variations of DNA/protein ecosystems can be functional (which we do, as we've observed it), it is reasonable to expect a multitude of valid, functional DNA chains, and the proteins produced by them.
That's hardly relevant though, what matters are resolutions that actually work.
I agree that we are (very) likely to find the mechanisms involved, but so far, we haven't. In fact, we don't even have a theory on how DNA was originally developed, or how non-functional DNA/proteins self-replicate, or really anything at all. We only have the end product (which does—as you point out—work). The question is how did it get there, and previous hand waving about a huge, old Universe and random chance isn't sufficient.
It's going to have to be something similar to what you (and other commenters) describe: mechanisms that preferentially and relatively quickly produce valid, self-replicating DNA/protein chains. To date, no one has found anything even close to that.
Perhaps I'm reading your original post too strongly, so please correct me if so.
In the first post you compare the number of valid DNA chains to the space of all possible chains, you mention the number of different proteins in the human body, and you draw an analogy to a random sequence of bits forming a valid program.
None of these talk to the probability of a reproducing molecule arising through physical processes, nor do they talk to the probability of DNA as a descendant of that original reproducing molecule (or potentially multiple original molecules).
I get that you understand the gaps in our knowledge of how these systems came to be; my point is that your original argument is misleading in the exact same way you claim the argument
"Billion of years passed since the Big Bang. If some chemical process can create life it's very likely that somewhere it did."
is a "kind of hand-wavey statement [that] seems to convince most people. Universe is hella-old, and really big. Ergo, incredibly rare stuff has happened basically infinitely many times. Life everywhere, etc."
(this was a reply to a different post, but I think it holds to the comment you originally replied to).
In fact, I find the argument that "things reproduce, and have been reproducing for a long time in a large environment, so we expect to see complexity in those things" much more reasonable than "most random arrangements of this molecule are useless, and we can see lots of useful arrangements, therefore time and randomness can't explain them".
But how do we get the first copy, the "original reproducing molecule" as you put it?
The usual explanation is that the "first copy" arose randomly, and then kept going. Do you believe that? I suspect not—but most people do.
We know that it can't have been random (which is the argument I gave, and I suspect you agree with). We should tell people "it wasn't random, something about the fundamental nature of these molecules caused better and more complex molecules to emerge." But we have no mechanism for that, just a (valid) belief that it has to be true.
I think we should find those mechanisms, and simultaneously, stop telling people that random chance + vast universe + long timespan is sufficient.
I believe a variation of that.
I believe that the first copy arose through physical processes.
Invoking 'randomness' is unnecessary and misleading.
Do you not believe this?
To my knowledge, we don't yet have a mechanism for how such a molecule came into being (though there are ideas).
We also don't have any reason to think that it must be some random single choice from a large possibility space, and we don't have any evidence at all that it could have arisen from non-physical processes (what would that even look like?).
DNA has about as much structure as bits on a disk (with coding for one of 20 amino acids as the "bits"). No DNA sequence is more likely than any other to exist.
I think that means we need to identify strong physical processes that produce useful DNA strands; you, apparently, aren't as concerned about it. Maybe you're right, but from where I'm sitting, it's hard to imagine what those physical processes might be since the strands they must produce are extremely, unimaginably rare in practice.
DNA is basically information, and we literally have no example of a chemical process producing valid DNA information, nor is it at all obvious how such a process might work in practice. In the past, large amounts of time + equal likelihood of producing random DNA was considered sufficient to think "well, useful DNA strands could appear randomly." We now know that's extremely unlikely, to the point of being effectively impossible, statistically speaking.
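To make the "bits" framing concrete, the standard information-per-position arithmetic:

```python
# Information content per position in a DNA/protein coding scheme.
from math import log2

print(f"per nucleotide (4 bases):    {log2(4):.2f} bits")      # 2.00
print(f"per codon (3 nucleotides):   {3 * log2(4):.2f} bits")  # 6.00
print(f"per amino acid (20 choices): {log2(20):.2f} bits")     # ~4.32
# The gap between 6 bits per codon and ~4.32 bits per residue is the
# redundancy of the genetic code (64 codons covering 20 amino acids + stop).
```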
The mechanisms for producing new DNA sequences involve copying existing DNA sequences. Thus, the ones that exist are privileged over the ones that do not exist (yet), and adjacent sequences are privileged over a random sequence.
> No DNA sequence is more likely than any other to exist.
It is far more likely for a DNA sequence very similar to my own to exist than a random sequence.
> we need to identify strong physical processes that produce useful DNA strands
We have already identified those processes! We know quite well how the machinery of DNA replication works.
If we care about the first DNA molecule to ever exist it's a very different question. We don't need to find a physical process that produces a modern DNA molecule from 'raw parts', rather one that takes not-quite-DNA and converts it into DNA.
> it's hard to imagine what those physical processes might be since the strands they must produce are extremely, unimaginably rare in practice.
Can you imagine slightly simpler DNA? Say just a bit shorter? What's the simplest molecule we might still call DNA, that is reproducing? Can we imagine machinery that would produce that?
I think it's very reasonable to think such machinery could exist, even if we don't know the exact mechanisms involved. We know that RNA can self-reproduce, and also produce proteins, so it's reasonable to think that machinery to produce RNA strands could evolve to produce DNA strands (for example).
The only involvement randomness has in this whole process are (relatively) rare and infrequent changes to self-replicating molecules, and (potentially) the initial formation of a self-replicating molecule.
It is irrelevant how many possible DNA sequences there are, or how much information is stored within them, as we know new sequences are derived from previous ones.
We haven't found that, and apparently aren't even close. We don't even have any idea what something like that might look like, or even more critically: given all the incredibly, insanely, unbelievably rare DNA sequences that exist in the world today, why is such a fundamental process capable of producing them not abundant as fuck already? Where'd it go? Why is this process even a mystery in 2020? It should be ubiquitous; in fact, all of the primordial soup mechanisms should be. Certainly that's what we expected when the theory was developed, and it hasn't panned out.
Anyway, I think we've exhausted this topic. Thanks for commenting.
We do have ideas! Specifically, within the RNA world hypothesis, the transition period is called the virus world.
> given all the incredibly, insanely, unbelievably rare DNA sequences that exist in the world today,
We have a good understanding of where diversity comes from, I'm not sure what point you're making here.
> why is such a fundamental process not abundant as fuck already. Where'd it go? Why is this even a mystery?
I don't think anyone thinks this process need be 'fundamental', though it definitely is pivotal. It only really needed to happen once, and then DNA was off reproducing and spreading by itself. That said, it looks like viruses converting RNA to DNA could still be happening today.
In general, we don't expect novel self-reproducing molecules to arise today, because they are out-competed by existing self-replicating molecules. In a world where nothing is replicating the first replicator is king. In today's world a brand new replicator is food for something else.
> Maybe it's possible that your romantic view of how this all happens (pseudo-Darwinian circa 2020) isn't telling the whole story?
I don't think I, or anyone else really, is claiming to tell the whole story - just that we have good reason to believe this came about through physical processes, and no evidence to believe... well I'm not sure what else there could be.
What are you proposing?
Wouldn’t a system A that is capable of encoding another complex system B need to be as complex, in order to encode all the information in the result?
It’s like a compression algorithm, you can encode the information, but the complexity level of that information is still there (also the difficulty in compressing the information increases very fast - exponentially or maybe even factorially).
So if the most basic protein sequence requires so many bits of information, wouldn’t anything capable of producing that (in a non-random manner) also require at least that level of information (if not more)?
It doesn’t matter what process we call systems A and B.
So it seems if randomness doesn’t solve the problem (because math), then the only conclusion is that there is a fundamental requirement for intentionality.
The prime example is The Game of Life - simple rules from which complex behaviour emerges.
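For anyone who hasn't seen it, the entire rule set fits in a few lines (a minimal sketch; the glider coordinates are the textbook ones):

```python
# Minimal Conway's Game of Life over a set of live (x, y) cells:
# a live cell survives with 2-3 live neighbours; a dead cell is born
# with exactly 3. That's the whole rule set.
from collections import Counter

def step(live):
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider: five cells that relocate themselves diagonally forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape, shifted by (1, 1)
```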
This idea of information is one we're putting onto the system, not some inherent attribute. Yes, the encoding of a protein needs to have enough information to produce that protein (or a family of proteins), but that says nothing about the process that created the encoding.
For example, a strand of RNA can be spliced in many different ways to create many different proteins, and this process can go weird in many ways. New sequences will arise from this process, even though they weren't 'intended' to.
The complex behavior comes from a large enough random starting state combined with a very low minimal required complexity to see something interesting. Also, even for a short interesting run of local behavior, the game never produces a stable behavior that grows in complexity beyond the initial information encoded in the random state. (i.e. if there is a bubble of cool stuff happening somewhere on the 2d plane, something usually interferes with it and destroys that pattern - like waves in the ocean, even when the energy curves combine to form a wave once in a while, they are limited and temporary).
So the Game of Life is actually an example that the system is limited to the information encoded in the initial starting state.
In the starting state there is either:
- a large enough random search space (i.e. a million random attempts with a 100x100 board might get something cool looking)
- intentionality (a person can design a starting state that can produce any possible stable system)
That's why the "initial condition" is so important, and why DNA is so important: without a good "start state", you get useless results—just like in the game of life.
What we are trying to find is not Conway's rules for the game of life, but this: how do we produce useful starting states (DNA) with a physical system? And more importantly, how do we create those starting states preferentially (i.e. non-randomly)?
We still need a model for how useful DNA (which corresponds to the "initial state" in the game of life) gets created. And we have no model for that right now, other than assuming unique random initial states are continually occurring and letting the law of large numbers eventually "find" winners.
While I don't think the pre-biotic problem is solved at all, we have a lot more models of how it could have happened than you seem to credit - this is after all a huge research area.
For example, here is one, and here is a whole journal issue on the subject.
I found these by searching for 'evolution of DNA' and 'evolution of RNA'.
Now, these models all include some randomness, but in no way does anyone assume "unique random initial states are continually occurring... letting the law of large numbers eventually "find" winners"
The models show plausible environments where pre-biotic synthesis of RNA (or RNA pre-cursors) can occur, and stabilise.
This model you keep bringing up - randomly selecting a molecule from all possible combinations of atoms and saying 'enough time will get you one that works' - is not mentioned anywhere that I have seen. Perhaps some lay-people (of which I am definitely one!) believe it, but as you point out it is so obviously implausible it falls down on first inspection.
There are other models (lots of them!) and they don't rely on this pure randomness.
What if amino acids and proteins are in fact likely to arise naturally and in favorable circumstances?
There was an article recently (~1-2 months ago) on HN about a supercomputer/AI discovering new chemical pathways for part of this process, but I can't seem to find it anymore. I think it was about forming amino acids.
I'm no expert on this subject (the opposite really, I've slept through chemistry), but my experience with large-scale simulations has been that a surprising number of them converge to the same final result given the same starting parameters even if most processes within them are perfectly random. The bigger the simulation, the more likely they are to give you stable results. And the universe is pretty damn huge.
So I like to believe the creation of the foundations of life is in fact more-or-less inevitable in our universe, in turn increasing the chance of useful proteins etc. forming.
And that kind of hand-wavey statement seems to convince most people. Universe is hella-old, and really big. Ergo, incredibly rare stuff has happened basically infinitely many times. Life everywhere, etc.
Only…it's actually not that old, we have some idea how big it is (not that big, just lots of space between atoms), and thanks to computer science, we're pretty good at analyzing issues surrounding computation complexity.
And as it turns out, the DNA-to-protein pathway is much much much less likely than our initial hand waving made it seem.
I'm not saying it didn't happen, I'm saying with our current level of knowledge we have no idea how. The math based around being old and big doesn't work. So we need better math, more studies, etc. and less hand waving.
This wasn't my argument though. In fact it was the complete opposite.
I was proposing that it was in fact likely and thus pretty much guaranteed to happen in a large universe, as opposed to being unlikely but still likely given a large enough universe.
So we're working with different assumptions here.
In fairness I put my assumption way at the beginning of my post, so it probably got forgotten about by the end of it. Quoting myself:
> What if amino acids and proteins are in fact likely to arise naturally and in favorable circumstances?
We haven't yet conclusively found all of the pathways these can arise, and we continue to discover more. People just tend to assume it's pretty unlikely. I'm not so sure.
The amount of information (via DNA) needed to create a useful protein from the 20 amino acids is absolutely incredible.
So…finding more potential (note: not demonstrated) pathways to create amino acids ex nihilo does literally nothing for producing viable DNA strands and proteins. DNA and proteins are a totally different problem, and we've made basically no progress at all, and the more we look at it, the less likely it seems.
And then people (not you per se) hand wave about the size of the Universe to explain the problem away. I think we should instead accept the problem exists and work to solve it.
Separately, we have no known examples of any natural process producing what we, as humans, would call "information." DNA is much closer to information than any other concept, to the point where if we were sent something similar to DNA from space in, say, a radio transmission, we would absolutely assume intelligent life had made that transmission.
That is, with our current knowledge, it takes something vaguely "intelligent" to produce the kind of information we have in DNA. Maybe such processes exist, but this is an absolute far cry from producing amino acids from chemical precursors, which are not information-like at all (and thus, it is unsurprising that we can do it).
I brought it up on HN because relatively few people seem to know this is still a problem, and progress on resolving it has been slow.
Also, we know stuff like the smallest observed polymerase, but we don't know what the smallest functional one that could have evolved into it would be.
We also have self-replicating pure RNA systems, though the components aren't abundant. But this is just what scientists came up with in one effort trying to make one to prove it is feasible:
Also, it's not that what I've called bad/garbage DNA doesn't produce proteins, it's that the proteins produced are useless: they don't "do" anything. There's no obvious reason why DNA "extension" should produce useful proteins over un-useful ones, at least, no mechanism that we have discovered so far.
Instead of accepting a theory of incremental improvement that "sounds nice", waving our arms about random chance and an old, vast Universe and going "yup, that's how it happened!", let's try to develop testable mechanisms and validate them.
I'm asking for more rigor while simultaneously shooting down "random chance", "plenty of time", and hand waving about the Law of Large Numbers. We've done the math and we need far more effective, directed mechanisms than random chance to produce useful DNA sequences.
This is very misleading. Proteins are defined by a minimum level of complexity, being strictly higher than peptides.
We're basically virtual machines/entities/minds stuck inside biological bodies, and the majority of us are at odds with nature and every other living organism on Earth.
I guess I'm more interested in how that happened than how life started, both seem equally incomprehensible to me, though.
Maybe humanity will be able to check the entire universe for life. I love imagining that scenario and wondering what the reaction would be when we find none.
We don't have enough information on how likely or unlikely life is to occur because we haven't even been able to replicate it yet.
It could be that life is so unlikely that we're lucky to be here.
It rather seems that you (and the 4500-6000°C comment) were committing a fallacy of large numbers. You might as well write a friggin' fantastillion, unimaginable, zomg!, and you would still convince roughly the same gaillion number of people. But it's good to hear the details.
4,000-6,000 doesn't sound like much at all in years to me, for example, but it used to.
The problem with the Drake equation is it had several variables for which we had no flippin idea what the values were. For example, supposing there are a bajillion stars (we know that much), then we have to multiply against how likely it is for those stars to have a planet - and at the time, the likelihood of planets was completely unknown.
That at least, is something that's changed in the last decade, thanks to new telescopes. We've addressed one of the Drake Equation's big unknowns: we now can hazard a guess that planets are extremely likely.
Sadly there are enough other unknowns that we still can't make any sort of conclusions, but at least the betting odds are going up.
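For reference, the equation itself is just a product of factors; every value below is an illustrative placeholder, not a claim:

```python
# Drake's N: communicating civilizations in the galaxy right now.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=1.5,  # new stars per year (reasonably constrained)
    f_p=1.0,     # fraction of stars with planets -- the newly pinned-down factor
    n_e=0.2,     # habitable planets per system (guess)
    f_l=0.1,     # fraction where life arises (pure guess)
    f_i=0.01,    # ...that develops intelligence (pure guess)
    f_c=0.1,     # ...that develops detectable communication (pure guess)
    L=10_000,    # years such a civilization keeps signaling (pure guess)
)
print(f"N ~ {N:.1f}")  # ~0.3 with these placeholders; the guesses dominate
```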
Portable MRIs are one application, once you can make a big enough chunk of the stuff, but earlier than that, couldn't you use small pieces of superconductor in communications equipment? Power supplies, ranging from IC power regulators up to mains power transformers?
[Edit to correct the maths] = 5325785739.6251 pounds per square foot.
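For anyone checking the conversion from the paper's 267 GPa (the figure above presumably started from a slightly different input pressure):

```python
# Converting the paper's pressure into more familiar units.
PRESSURE_PA = 267e9    # 267 GPa, from the paper
PA_PER_ATM = 101_325
PSF_PER_PA = 0.020885  # pounds-force per square foot, per pascal

print(f"{PRESSURE_PA / PA_PER_ATM:.2e} atm")  # ~2.64e6 atmospheres
print(f"{PRESSURE_PA * PSF_PER_PA:.2e} psf")  # ~5.58e9 pounds per square foot
```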
Looks like the highest current ever achieved is 100 kA. A mere 100-fold increase and we should be good to go.
- Maglev trains can be built at much lower cost
- Other levitating transportation via the Meissner effect
- Quantum computing
- Entertainment, theme parks
Lots of things I can think of.
Still a success, yes, but probably not a milestone, unless that discovery leads to other findings, but I do not see anything indicating such.
Or solar power transported from the south to the north of US or EU.
Superconductors are the ultimate joker in green energy.
Superconductors won't play a role in energy transport until their installation and operational costs over their lifetime are less than the installation and operational costs, plus losses, of legacy conductors.
To be more precise, losses are exponential with regard to distance. “A typical loss for 800 kV lines is 2.6% over 800 km”, which would be close to 60% losses between antipodes.
I think that’s still plausibly worth doing, given renewable prices, but it’s not great if you can avoid it.
 that said, I know nothing about the cost of making HV power grids, and it is entirely possible the costs I am ignorant of would make it a bad idea
 but not, as autocorrupt first wrote, antipopes — cstross has nothing to do with the global electricity supply, to my knowledge
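Compounding the quoted 2.6%-per-800-km figure is a one-liner (a sketch, assuming that loss rate holds at all distances):

```python
# Fraction of power surviving a long HVDC run, compounding per segment.
def delivered(km, loss_per_segment=0.026, segment_km=800):
    return (1 - loss_per_segment) ** (km / segment_km)

for km in (800, 5_000, 20_000):  # ~20,000 km is the antipodal great-circle distance
    print(f"{km:>6} km: {1 - delivered(km):.0%} lost")
# 800 km: ~3%; 5,000 km: ~15%; 20,000 km: ~48%. Real routes run longer
# than great circles, which pushes the antipodal figure up further.
```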
> Here we report superconductivity in a photochemically transformed carbonaceous sulfur hydride system, starting from elemental precursors, with a maximum superconducting transition temperature of 287.7 ± 1.2 kelvin (about 15 degrees Celsius) achieved at 267 ± 10 gigapascals.
I posed this to an EM friend and he estimated 1,000,000 Amp-turns would be required. Never checked his math but that current seems plausible with a good superconductor, plus it's cold on Mars!
Not polluting is something everyone in the world has to buy in to.
The first option seems much easier to organize.
So no, cleaning up isn't easier or we wouldn't be fucked right now. By the time we get to Mars as a colony we will, by necessity, have the tech to produce clean energy and will not be able to rely on oil. Thus starting fresh without polluting from the onset - something that was impossible on our own world.
There are only two ways to get things done.
1. Companies develop/supply a "product" because they can profit from it.
2. Companies are forced or subsidised by governments to supply a "product" they wouldn't otherwise be able to do for a profit.
It seems that only option 2 would be applicable here.
One thing to watch out for though, is that it's only half of the rationale behind escapism: we're concerned about our own stewardship of the Earth, but there's a very real concern that a disaster could happen to it that's not of our own making. An asteroid, for example.
Mars would protect us from several categories of these, and becoming multi-stellar would protect us from several more.
If we don't fix the core problem before we colonize other planets, we will become an interplanetary virus, acting as a parasite and killing our host over time.
Mars will build up to sustainability, while on earth we try to cut back to sustainability.
It's the only place we can inhabit, so we should focus on protecting it.
I love everything about space exploration, but I'm not naive enough to believe it's the solution to our problems here on Earth. One might argue it's a distraction from our ongoing global humanitarian crisis.
Every day, people are dying from lack of food, water, shelter, etc...
The time & money spent on solving the galaxy's mysteries could be brainpower-backed capital used to solve our dire terrestrial affairs. IMHO...
Food for thought...
There are 7 billion people on earth. That gives us a bit of leeway to multitask. We can have activists and rocket scientists solving different problems.
This is often repeated but makes no sense at all. The time and resources humanity as a group spends on those activities corresponds to 0.1% of our output. Infinitely more is wasted on mundane stuff like manufacturing cars, golf carts, office jobs or reading online forums.
I suggest you read this: https://lettersofnote.com/2012/08/06/why-explore-space/
> "In 1970, a Zambia-based nun named Sister Mary Jucunda wrote to Dr. Ernst Stuhlinger, then-associate director of science at NASA’s Marshall Space Flight Center ... Specifically, she asked how he could suggest spending billions of dollars on such a project at a time when so many children were starving on Earth. Stuhlinger soon sent the following letter of explanation ... later published by NASA, and titled, “Why Explore Space?”"
"learn to help each other solve problems"
The most important problem we need to solve, is how to survive in the universe, where any large rock falling from the sky can wipe out our civilization, if not the whole mammalian branch.
We don't have to abandon efforts to improve human life on this planet while trying to expand to more than one.
Only in the sense that a climbing harness is also a single point of failure.
I'm trying to imagine how these extreme pressures would modify bond angles, nuclei spacing, and constraints on motion. And also trying to understand how that's affecting the behavior and creation of the Cooper pairs.
First concept, virtual particles vs real particles. When we talk about "an electron flowing through metal" it is not actually a single electron. As it moves, the electron will move into an atom, another gets knocked out. But in aggregate it "acts like" a single particle with possibly different properties from a real electron. For example it likely has a different mass. A virtual photon will travel slower than a real one. And so on.
Virtual particles can even correspond to things that aren't particles at all! For example sound is a wave, and quantum mechanically is carried by virtual particles known as phonons. These act exactly like any other particle, even though they are actually aggregate behavior of lots of other things!
A Cooper pair is a pair of things (eg electrons) that are interacting enough that they have a lower energy together than they would apart. Electrons are fermions, with half spin. They have a variety of properties, such as the Fermi exclusion principle. A bound pair of electrons becomes a virtual particle with an integer spin. Which makes it a boson, which behaves differently.
Superconductivity happens when charge is carried by bosons.
In high temperature superconductors, it looks like the electrons are at least partially bound by interaction with phonons. The high pressures change the speed of sound, and therefore change how easily Cooper pairs form.
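In formula form, the textbook weak-coupling BCS estimate makes that sound-speed link explicit (a sketch only; real hydride predictions use stronger-coupling Migdal-Eliashberg theory, and both parameter sets below are made up for illustration):

```python
# Weak-coupling BCS: Tc ~ 1.13 * Theta_D * exp(-1/lambda), where the
# Debye temperature Theta_D scales with the lattice sound speed. That's
# one route by which squeezing the lattice can raise Tc.
from math import exp

def bcs_tc(theta_debye, lam):
    return 1.13 * theta_debye * exp(-1 / lam)

print(f"{bcs_tc(300, 0.4):.0f} K")   # ~28 K: an ordinary-metal-ish regime
print(f"{bcs_tc(1500, 0.5):.0f} K")  # ~229 K: a stiff, hydrogen-rich lattice
```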
However https://phys.org/news/2019-04-mechanism-high-temperature-sup... claims that there is now a theoretical explanation for high temperature superconductors, and the best guess above doesn't seem to be the real explanation.
Remember what I said about particles having a different mass moving through materials? The binding together of electrons through interaction with phonons seems to depend on the mass of the electrons. When you squeeze the lattice, that mass decreases.
This is the Pauli exclusion principle, in case someone wants to learn more on the subject.
Interesting. Do we know if it is possible to disrupt superconductivity with sound at just the right frequency? And the converse: has anyone tried to enhance superconductivity by using sound (i.e. increase the critical temperature, increase the current density, etc.)?
Besides this new example with superconductivity, there are other more familiar phase transitions with the same behavior.
For example, with most liquids, in order to solidify them you may either cool them or compress them.
The same if you want to liquefy gases, either cooling or compressing has the same effect.
Room-temperature superconductivity at very high pressures was predicted many years ago, but it is very nice to have an experimental confirmation.
The other knob you can use to change the vibrations is the mass of the balls. This can be done by using different isotopes of the same element and the critical temperature goes down with mass.
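In code, that isotope scaling is just the following (a sketch, with alpha = 0.5 as in simple BCS; the 287 K reference value is the paper's Tc):

```python
# Simple isotope effect: Tc scales as M^(-alpha), alpha ~ 0.5 in BCS,
# because phonon frequencies go as 1/sqrt(M).
def tc_for_isotope(tc_ref, m_ref, m_new, alpha=0.5):
    return tc_ref * (m_ref / m_new) ** alpha

# Illustrative: hydrogen (mass 1) -> deuterium (mass 2) in a 287 K hydride
print(f"{tc_for_isotope(287, 1, 2):.0f} K")  # ~203 K
```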
I don't quite remember my intro to electrical components, though it's a quick read for the basics. The GP obviously knows about atom models and band gap.
The paradox bit is that, as far as I can tell, pressure is roughly equivalent to heat, and heat equals decreased intrinsic conductivity. But if I imagine that high pressure restricts the absolute motion of particles, that would equal decreased resistance (like an idealized fixed suspension for your swing, that doesn't take energy out of the system).
Since hydrogen is involved, I suppose there's a channel of hydrogen rumps without any electrons, and the high pressure is needed to keep the hydrogen from moving apart and recombining outside the ensemble. Surely this involves some form of entanglement? Which I imagine as a kind of clockwork, all cores spinning in unison.
Haha, I have no idea what I'm talking about.
The equal charges play a part in the problem, but do not stop the electrons from pairing up. There is a lot of virtual particle exchange between them, but that's how forces happen. It's more correct to say that the crystal mechanically constrains the electrons into pairs than that the electrons pair with virtual particles.
(IANAP, but this one topic I have studied a little.)
Now a superconductor is just a conduit for electrons that doesn't generate heat. We know from Landauer's principle that heat is only generated when you destroy information. If I take a pair of entangled electrons, those electrons contain exactly one bit of information (in the von Neumann sense). If I cannot add energy in excess of the energy required to disentangle them, then that bit of information is never destroyed.
Whether or not a given interaction between the electron pair and the substrate has enough energy to disentangle them is not a function of temperature, it is a function of the actual energy that may be imparted to my pair. Which is proportional to the actual heat in my material, rather than its temperature.
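For scale, the Landauer bound at room temperature (standard constants, nothing assumed):

```python
# Landauer's bound: erasing one bit dissipates at least k_B * T * ln(2).
from math import log

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(temp_kelvin):
    return K_B * temp_kelvin * log(2)

print(f"{landauer_limit(300):.2e} J per bit at 300 K")  # ~2.87e-21 J
```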
Of course to create that magnetic field, you have to have superconductors very close to this superheated plasma. So the first thing to relate to this is there may be less cooling required.
The second thing is, and this is both a stretch AND possibly a huge gain, but perhaps the required pressure for the superconduction can be provided by the inherent pressure of the fusion reactor core.
The idea would be to create a superconductor with a pressure/temp curve that is amenable to the pressure/temp curve of the starting sequence of a fusion reactor.
It is really weird they wrote about 'atmospheres' too...
> Over the course of their research, the team busted many dozens of $3,000 diamond pairs. “That’s the biggest problem with our research, the diamond budget,”
> “It’s clearly a landmark,” said Chris Pickard, a materials scientist at the University of Cambridge. “That’s a chilly room, maybe a British Victorian cottage,” he said of the 59-degree temperature.
Maybe that's why the acronym is what it is. I wonder what GLAM's diamond budget is.
Contents are completely enclosed by diamond to hold the pressure. No way to probe otherwise as far as i know.
Here is an example at an ESRF beamline https://www.esrf.eu/home/UsersAndScience/Experiments/MEx/ID1...
Pretty funny if you ask me.
That being said, lab diamonds are not necessarily that cheap, depending on the dimensions and qualities necessary.
It's so annoying to read science articles about how X will revolutionize Y, but you have to dig through the comments section to find out why it won't work.
Most new research findings only have very specific applications. It's only groundbreaking when something can (eventually) be implemented in real life for a reasonable cost.
And yes, I say misleading. Technically true but misleading, because their omission is absolutely critical to the nature of their breakthrough and as you implied, anyone who knows the first thing about room temperature superconductors will want to know if the material has a drawback stopping it from functioning outside of a strictly lab setting.
Engineering a wire under pressure whose whole length is compressed by a structure made out of carbon nanotubes is clearly difficult, but seems theoretically possible. It is very likely beyond our current engineering capabilities. But in principle it is a technology that we could try to develop.
For example, at low temperature you assemble a wire with a material that has a high thermal coefficient of expansion down the center. Then the superconductor around that in a ring. Then a carbon nanotube sheath around that which traps things. Then when it warms up, the core squeezes the superconductor against the sheath and you get the pressure.
When you hit one end of the wire with a hammer, or drop something on it, the entire wire might explode.
The whole excitement about room temperature superconductors is getting rid of the difficulty of cooling, the difficulty of this pressure requirement is easily much worse.
Then you make it feasible.
And then you make it practical.
One step at a time.
Though I think we could go a long way with the liquid nitrogen temp superconductors now on the market. It's still going to be a real chore to design around, but it's got to be a lot easier to deal with liquid N2 than liquid He.
Similar to a CS paper showing a new algorithm that, e.g., sorts with x% fewer swaps than quicksort; it might not actually lead to a performance increase on real hardware.
For example, it states preposterous sentences such as this:
"The fact that the fine structure constant can be expressed as a function of (2e) shows how important the notion of electron pairing is in the composition of the Universe, and gives credence to the theory that the fundamental cosmic meta-structure may be thought of as a charged superfluid, in other words, a superconducting condensate."
This guy is a scammer who was able to bamboozle his patent attorneys, presumably he gets some incentive to publish patents?
He didn’t check before it was filed because he didn’t care (the point of the patent was “we needed a patent-protected system to be granted a license to a codec”), and it was granted anyway.
1) Taking a wire, and mechanically inducing a wave of lattice vibrations
2) Firing a pulse of electricity down the wire to "ride the wave" of superconductivity produced
whereas normal superconductors (including this one) produce superconductivity because the first electrons in a wave of current pull in the positively charged lattice behind them as they pass over, creating a wave that attracts the second wave of electrons traveling behind (which in turn does the same to the third).
(1) Please read the terms of the data sheet carefully and consult your local engineer before using. Certain restrictions may apply. 0 ohm not available in all jurisdictions.
ii. In schematics != in reality
c. Lots of things are within a useful margin of error of $impossible_standard
δ. This entire post is about superconductors
Looks like unnouinceput is just in the wrong jurisdiction. :)
> Features: Anti-sulfur, ...
Imagine the shipping warning labels.
For example, if you had a rod of the superconducting material 1mm in diameter, and you wrapped it tightly with a strand of Kevlar (tensile strength 3.6GPa) until it became 100mm diameter, then the center would have a suitable pressure...
If that's the case, you just set the tension in the thread so the Kevlar is near its breaking point, and start winding, like winding thread onto a bobbin...
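A quick sanity check under a simple constant-tension winding model (my assumption, and it ignores layer-on-layer relaxation) suggests the numbers don't close, though:

```python
# Each thin layer of thickness dr wound at stress sigma on radius r
# adds dP = sigma * dr / r, so the core pressure integrates to
# P = sigma * ln(R_outer / R_inner).
from math import log

SIGMA = 3.6e9     # Pa, the Kevlar tensile strength quoted above
R_INNER = 0.5e-3  # m, 1 mm diameter rod
R_OUTER = 50e-3   # m, 100 mm diameter wrap

core_pressure = SIGMA * log(R_OUTER / R_INNER)
print(f"{core_pressure / 1e9:.0f} GPa")  # ~17 GPa, far below the ~267 GPa needed
```

And because the pressure only grows with the log of the radius ratio, even absurdly thick wraps stay more than an order of magnitude short: you'd need ln(R/r) around 74, i.e. a radius ratio of e^74.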
Shouldn't "room temperature" include in its definition about one atmosphere of pressure?
I can tell you that I wouldn't want to experience "room temperature" 30,000 km above sea level.
(Of course it's hilariously not "S" at all, because no one can agree whether the "T" is 0 deg C or 25 deg C or somewhere in between. But when we're talking about room-temperature superconductors that's not really a big deal.)
Room temperature means room temperature. Room temperature and 1 atmosphere of pressure means room temperature and 1 atmosphere of pressure.
hasn't this pretty much always been the crux - to fix the particles in place?
while it's a nice achievement experimentally-speaking, is it really that surprising that materials immobilized by enormous pressures but at elevated temperatures exhibit the same behavior as if they were immobilized via chilling to near 0K? either condition is impractical to attain outside of a specialized lab.
Second question, do virtual particles have the same Casimir effects in this apparatus, as we would see in low pressure experiments? If you're interested, also checkout the results published recently on measuring the Casimir force. Reference: “Casimir spring and dilution in macroscopic cavity optomechanics” by J. M. Pate, M. Goryachev, R. Y. Chiao, J. E. Sharping and M. E. Tobar, 3 August 2020, Nature Physics.
I thought we had reached this record with solid hydrogen. But I cannot find this online anywhere. The material this article goes over is hydrogen-carbon-sulfide. The previous record for superconductivity was using hydrogen-sulfide.
I wonder what other materials can be added to lower pressure at room temperature and maintain superconductivity. Lithium? Nickel? Copper?
However, metallic hydrogen is pretty much theoretical at this point, although it looks like we are getting closer. No other material that I know of had achieved >0°C superconductivity before.
It's so exciting because there's no obvious limitation to why it wouldn't work. If superconductors improve at current rates we'll just end up there in 30-40 years naturally.
To think in another generation humans might have cheap limitless power, it's tantalizing
> At least not in this universe.
Good one. Did they take it down?
Edit: looks to be back now.