Ask HN: What scientific phenomenon do you wish someone would explain better?
607 points by qqqqquinnnnn 38 days ago | 813 comments
I've been studying viruses lately, and have found that the line between virus/exosome/self is much more blurry than I realized. But, given the niche interest in the subject, most articles are not written with an overview in mind.

What sorts of topics make you feel this way?




Quantum Computers. Not like I'm five, but like I'm a software engineer who has a pretty decent understanding of how a classical turing machine works. I can't tell you how many times I've heard someone say "qubits are like bits except they don't have to be just 1 or 0" without providing any coherent explanation of how that's useful. I've also heard that they can try every possible solution to a problem. What I don't understand is how a programmer is supposed to determine the correct solution when their computer is out in some crazy multiverse. I guess what I want is some pseudo code for quantum software.


I recommend Computerphile's videos https://www.youtube.com/playlist?list=PLzg3FkRs7fcRJLgCmpy3o....

I had the same "problem" as you. What finally made me feel I sort of cracked it was those videos. The way I think of it now is: They let you do matrix multiplication. The internal state of the computer is the matrix, and the input is a vector, where each element is represented by a qubit. The elements can have any value 0 to 1, but in the output vector of the multiplication, they are collapsed into 0 or 1. You then run it many times to get statistical data on the output to be able to pinpoint the output values more closely.


I don't know if it's accurate (because I never understand anything I read about it) but this is the most concise and clear explanation I've read on this subject to date. Thank you!


It's accurate. QC is just linear algebra with complex numbers, plus probability extended to the complex domain. Why that's useful is something I'm still struggling with as well.


I'd assume it's the speed advantage, but the only problem I can think of that would require that type of exponential speed is cracking hashing algorithms which just seems destructive and counterproductive, like building a digital nuclear bomb - and from my very limited understanding that's a long ways off from being achieved.

I assume there's probably many more complex computational problems outside of my domain that QC can help with. Does anybody know of any?


Aside from Shor's, the other is Grover's algorithm, which deals with search in an unstructured database. More and more superpolynomial speedups have been discovered in applications of QC. A good enumeration of these is the quantum algorithm zoo.

https://quantumalgorithmzoo.org/


Grover's algorithm lets you search an unsorted list of four elements with just one "is this thing in the list?" query. Classically, of course, it requires four queries.

More precisely, given f: 2^n -> {0,1} which is guaranteed to hit 1 exactly once, Grover finds the one input which hits 1, and it does so using about 2^{n/2} queries of f; but the constants happen to line up so that when n=2, exactly one query is required.
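That n=2 case is small enough to simulate with a plain list of four amplitudes. A toy sketch (my own code, not from the thread): one sign-flipping oracle query plus one reflection about the mean, and the marked index comes out with probability 1.

```python
# Grover's algorithm on a 4-element "list" (n=2 qubits), pure Python.
# The oracle call is the single "is this the marked item?" query; one
# Grover iteration then makes the marked index certain when N=4.
N, marked = 4, 2

state = [1 / N**0.5] * N              # uniform superposition over all indices

# Oracle query: flip the sign of the marked item's amplitude.
state[marked] = -state[marked]

# Diffusion: reflect every amplitude about the mean.
mean = sum(state) / N
state = [2 * mean - a for a in state]

probs = [round(a * a, 6) for a in state]
print(probs)  # [0.0, 0.0, 1.0, 0.0] -- one query, certain answer
```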


Many problems require lots of huge matrix multiplications—think simulations of physical systems, i.e. weather systems or molecular interactions, or numerical optimization.


Additionally, many problems can be converted to matrix operations, like graph algorithms.

Note that matrix multiplication takes O(n^2) time with a quantum computer, but O(n^2.807) time on a classical computer.


> but O(n^2.807) time on a classical computer

Optimizing matrix multiplication for classical computers is an open research problem, and according to wikipedia there are algorithms with O(n^2.37) running time. Also according to wikipedia, it is not proven that matrix multiplication can't be done in O(n^2).


Ah yes, we may as well add NP-completeness to this thread.


I think there's no way to understand quantum computing without first understanding some linear algebra, specifically tensor products. How ten 2-dimensional spaces give rise to a 1024-dimensional space, how the Kronecker product of three 2x2 matrices is an 8x8 matrix, and so on. If you're comfortable with that, here's a simple and precise explanation of quantum computing:

1) The state of an n-qubit system is a 2^n dimensional vector of length 1. You can assume that all coordinates are real numbers, because going to complex numbers doesn't give more computational power.

2) You can initialize the vector by taking an n-bit string, interpreting it as a number k, and setting the k'th coordinate of the vector to 1 and the rest to 0.

3) You cannot read from the vector, but exactly once (destroying the vector in the process) you can use it to obtain an n-bit string. For all k, the probability of getting a string that encodes k is the square of the k'th coordinate of the vector. Since the vector has length 1, all probabilities sum to 1.

4) Between the write and the read, you can apply certain orthogonal matrices to the vector. Namely, if we interpret the 2^n dimensional space as a tensor product of n 2-dimensional spaces, then we'll count as an O(1) operation any orthogonal matrix that acts nontrivially on only O(1) of those spaces, and identity on the rest. (This is analogous to classical operations that act nontrivially on only a few bits, and identity on the rest.)

The computational power comes from the huge size of matrices described in (4). For example, if a matrix acts nontrivially on one space in the tensor product and as identity on nine others, then mathematically it's a 1024x1024 matrix consisting of 512 identical 2x2 blocks - but physically it's a simple device acting on one qubit in constant time and not even touching the other nine.
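The four rules above can be sketched directly as a toy state-vector simulator (code and names are mine): the state is a 2^n list, and a one-qubit gate is applied by pairing up the coordinates that differ only in the target qubit's bit, with no 1024x1024 matrix ever built.

```python
def apply_one_qubit_gate(gate, state, target):
    """Apply a 2x2 gate to one qubit of a 2^n-dim state vector (rule 4)."""
    out = state[:]
    for k in range(len(state)):
        if not (k >> target) & 1:           # k has the target bit 0
            k1 = k | (1 << target)          # partner index with target bit 1
            a0, a1 = state[k], state[k1]
            out[k]  = gate[0][0] * a0 + gate[0][1] * a1
            out[k1] = gate[1][0] * a0 + gate[1][1] * a1
    return out

n = 10
state = [0.0] * (2**n); state[0] = 1.0      # rule 2: initialize to |00...0>
H = [[2**-0.5, 2**-0.5], [2**-0.5, -2**-0.5]]
state = apply_one_qubit_gate(H, state, target=0)
# Rule 3: outcome probabilities are the squared coordinates.
print([round(a * a, 3) for a in state[:4]])  # [0.5, 0.5, 0.0, 0.0]
```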


Thank you for posting such a concise description.


What about some kind of interactive simulation, kind of like playing with a graphing calculator? People tend to relate to things better by tinkering and playing with parameters to observe the impact on results. Analog experimentation is how we learned most Newtonian physics.


Several years ago, I gave a presentation on quantum computing to the Los Angeles Hacker News Meetup. The slides are at https://jimgarrison.org/quantumcomputingexplained/ . Unfortunately, there is no video recording so they are currently lacking explanations.

My goal was to explain quantum computing in a way that is mathematically precise but doesn't require one to learn linear algebra first. To do this, I implemented a quantum computer simulator in Javascript that runs in the web browser. Conceptually (in mathematical language), in each simulation I present, I've started by enumerating the computational basis of the Hilbert space (all possible states the qubits could be in) and represented the computational state by putting an arrow beside each of them, which really is a complex number. (This is similar to how Feynman explains things in his book QED.) The magnitude of the complex number is the length of the arrow, and its phase is the direction it points (encoded redundantly by its color). I've filled out the amplitude symbol with a square so that at any given point, the probability of a measurement resulting in that outcome is proportional to the area of that square. Essentially, in this language, making a measurement makes the experimenter color-blind -- only the relative areas of the amplitudes matter, and there is no way to learn phase information directly without doing a different experiment.

I could make a further document explaining along these lines if people are interested. The source is on github too: https://github.com/garrison/jsqis


Strilanc, who works on the google quantum team, has a simulator here: https://algassert.com/quirk


https://quantumjavascript.app/

You might find this useful. Along with the author's write-up:

https://medium.com/@stew_rtsmith/quantum-javascript-d1effb84...


IBM has an amazing tool for this. Not only do they have a great simulator, but you can enter a queue to run your program on their real quantum computer:

https://quantum-computing.ibm.com/


Excellent! Thanks so much for this!


Since no one has listed it yet, please check out https://quantum.country/

It's by Andy Matuschak and Michael Nielsen, and it is excellent. Have fun!


Yes! How come this isn't higher up in the list? This is one of the best pieces of education I've ever seen. Absolutely wonderful.


This deserves to be upvoted all the way to the top!


upvoted


Very simplified explanation:

If you understand Turing Machines, you probably also understand other automata. So you probably understand nondeterministic automata [1].

A quantum computer is like a very restricted nondeterministic automaton, except that the "do several things at once" is implemented in physics. That means just like an NFA can be exponentially faster than a DFA, a QC can be exponentially faster than a normal computer. But the restriction on QCs makes that a lot harder to do, and so far it only works for some algorithms.

As to why quantum physics allows some kind of nondeterminism: If you look at particles as waves, instead of a single location you get a probability function that tells you "where the particle is". So a particle can be "in several places at once". In the same way a qubit can have "several states at once".

> What I don't understand is how a programmer is supposed to determine the correct solution when their computer is out in some crazy multiverse.

Because one way to explain quantum physics is to say that the waveform can "collapse" [2] and produce a single result, at least as far as the observers are concerned. There are other interpretations of this effect, and this effect is what makes quantum physics counterintuitive and hard to understand.

[1] https://en.wikipedia.org/wiki/Nondeterministic_finite_automa...

[2] https://en.wikipedia.org/wiki/Wave_function_collapse


The thing I don't understand about QC is how on earth you can read values from qubits without breaking superposition.


You can't, it's a fundamental aspect of quantum mechanics (measuring any entangled system collapses it because you've forced the system into a state by measuring it).

The idea is that you structure the QC system such that the computation is done using entangled states, but when it comes to measuring the qubits (to get the result of the computation) the state is such that you'll get meaningful results. This means the quantum state at the end of the calculation would ideally be along whatever axes you're measuring, so you get the same answer 100% of the time.


OK, but this implies that you'll have to know beforehand what a result will look like. Which kind of defeats the purpose of a general-purpose computational device.


No it doesn't. You know that the result of the computation for an individual qubit will be either 0 or 1 (otherwise it would be useless -- measuring only gives you one bit of information), so you construct the system such that after the computation is done each qubit will be aligned with the |+z> or whatever axis. The key point is that you have to be clever about how you construct the system for a given QC algorithm, not that you cannot do arbitrary computations using the system.


OK, but we're back on square one. If you can't read info from qubits without breaking the state of the whole freaking system, then what exactly is it that you're reading? Doesn't the alignment info collapse superposition?


You do "break the state of the whole freaking system". Once you've read the output, you're done. You have to set up your initial state again and run the computation from scratch.


You design the algorithm so that it collapses the state into the right result.


You read the 'probability the qubit is one' by running multiple times and doing statistics.

I found an explanation of Shor's algorithm by my colleagues quite helpful. In my experience math seems to be more useful here than computer science.


Here is a video of a researcher at Microsoft Quantum lecturing on this: https://www.youtube.com/watch?v=F_Riqjdh2oM "Quantum Computing for Computer Scientists".

However, even with understanding how a Quantum Computer works at its most basic level I still have difficulty understanding the more useful Quantum Algorithms:

https://en.wikipedia.org/wiki/Shor%27s_algorithm

https://en.wikipedia.org/wiki/Grover%27s_algorithm


I honestly recommend the following:

1. Pick up the standard textbook by Nielsen and Chuang, Quantum Computation and Quantum Information, and read the first two to three chapters.

2. Solve the exercises for the Q# programming language.


I found Quantum Computing Without The Physics epically helpful. https://arxiv.org/abs/1708.03684

It explains in terms a computer scientist can understand. As in: it sets out a computational model and explores it, regardless of whether we can physically realize that machine.

Hope this helps!


Not sure if it has helped anyone else, but it did me!


IBM's Quantum Experience is great for this. It walks you through the quantum gates you can use and lets you write small programs for quantum processors.

I've found that to be the clearest way of understanding what qcs do.


There’s an O’Reilly book on Programming Quantum Computers that explains it really well. Bonus: it was written by Batman.


Pseudo-code for quantum computers is currently linear algebra. Fortunately, most programmers have the required linear algebra to get a thorough understanding of the basics! Check this out https://quantum.country/qcvc. Fair warning, I did have to brush up on my linear algebra a bit, but it's worth it imo. Friends in the know say that when you understand this article, you understand quantum computers.


I had an aha moment with quantum computers a few months ago when reading an article that explained it as probability distributions. I don't think I have the complete understanding in my mind anymore and I wish I had saved the article, but looking into how quantum computers essentially serve as probability distribution crunching machines might help with your understanding.


So can they still do traditional deterministic(?) calculations? Or would that be somewhat akin to using machine learning to do your taxes; possible but just overkill?

I've often heard it said that Quantum Computers can crack cryptographic keys by trying all the possible inputs for a hashing algorithm or something handwavey like that. Are they just spitting out "probable" solutions then? Do you still have to try a handful of the solutions manually just to see which one works?


"Trying all possible solutions" is generally a bad metaphor for quantum computing and will confuse you. (It's more like you start out with all answers having equal probability, and get the wrong answers to somehow cancel each other out, making the "right" answer have a high probability.)

I am not a quantum person, but I once saw a geometric explanation for Grover's algorithm which kind of made it all make sense to me. (Grover's algorithm is the quantum algo you use for problems where you don't know any approach better than brute force. It can brute-force stuff in O(sqrt(n)) guesses instead of O(n) like a normal computer.) Basically, the geometric version was that you start with all possibilities being of equal probability (i.e. an even superposition of all possible states), negate the amplitude of the correct answer, then reflect the amplitudes around a line that is the mean of the amplitudes (do that sqrt(n) times). The end result is that the correct answer has a higher probability than the other answers. I unfortunately can't find the thing where I originally saw this, but they visualized it basically as a bar graph (of the amplitudes of possible states) and it seemed much clearer to me than other explanations I have come across.
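For what it's worth, that bar-graph picture is easy to reproduce numerically. A toy sketch (pure Python, my own code) for an 8-element search, printing the marked item's probability after each negate-and-reflect round:

```python
N, marked = 8, 5
amps = [1 / N**0.5] * N                   # even superposition: a flat bar graph
for it in range(1, 3):                    # ~sqrt(N) iterations
    amps[marked] = -amps[marked]          # negate the correct answer's amplitude
    mean = sum(amps) / N
    amps = [2 * mean - a for a in amps]   # reflect every bar about the mean
    print(f"after iteration {it}: P(marked) = {amps[marked]**2:.3f}")
# after iteration 1: P(marked) = 0.781
# after iteration 2: P(marked) = 0.945
```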


Here's what Scott Aaronson says in regard to "trying all the possible inputs": https://news.ycombinator.com/item?id=17425474


Yeah, they can do deterministic calculations. You just avoid ever putting the qubits in a state where measurement gives probabilistic results. It would, however, be a ridiculous use of the technology, like using a neural net to simulate an XOR.


Like probability distributions, but they don't just sum when you combine them, they interfere (probability is the square of amplitude, which can be negative).

Quantum computing is all about finding ways to hack the interference process to compute more than you otherwise would have.
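A tiny sketch of that interference (my own toy code): apply a Hadamard "coin flip" twice starting from |0>. A classical 50/50 mix of 50/50 coins stays 50/50, but here the two paths to |1> carry opposite signs and cancel, so the result is deterministic.

```python
s = 2**-0.5
H = [[s, s], [s, -s]]   # Hadamard: a "fair coin flip" with signed amplitudes

def apply(gate, state):
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

state = apply(H, apply(H, [1.0, 0.0]))   # two "flips", starting from |0>
print([round(a, 6) for a in state])      # [1.0, 0.0]: the |1> paths cancelled
```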


You're in luck, my friend. Perl has had a quantum computing module since the late 90s:

https://metacpan.org/pod/Quantum::Superpositions

As far as I can tell this one still outperforms all existing "hardware implementations".


The first 'meetup' I went to some 20 years ago was on this module!


Nobody understands quantum mechanics. Lots of people know how to apply it. I don't think quantum technology is going to go anywhere until physicists revamp the theory & the pedagogy in a way that makes it comprehensible.

I say this as someone who passed 2 semesters of graduate QM.


> I say this as someone who passed 2 semesters of graduate QM.

That's funny, because my EE math concentration was on advanced calculus. I took two semesters of a-calc and got A's, but I only know how to compute a Jacobian and apply it, not its origin story. It's a very weird feeling to understand the motions but not the ... depth?


I wrote a basic response, but it got longer than I thought it would and HN complained about it being too long, so here's a pastebin: https://pastebin.com/zTJA4bJh


This podcast episode has an amazing explanation by one of the top researchers in the field: https://www.youtube.com/watch?v=uX5t8EivCaM

The basic idea is that by making the amplitudes of the qubits destructively interfere with each other in certain ways, you can eliminate all of the wrong answers to the question you're trying to answer.


not quantum computers per se but I had my aha moment when I saw quantum computing as "linear algebra with probability coefficients". It's mostly working on superpositioned qubits while doing the calculation and then making them (with linear transformations) as likely as possible to collapse on the solution (when "measured").


I looked briefly at QCs and I understood them as kind of a machine that doesn't try every possible solution. It is in every possible solution, and then you can somehow "lock it" in the state that is the solution to your problem. Kind of like NP problems are hard to find a solution but easy to verify a solution.


I think because it has fewer applications for traditional "software" and more applications for efficiency of embedded systems.

But it could have some bleeding edge new applications from the TCP/IP space for urgent point, new methods for cryptography, or speeding up algorithms for searching. ¯\_(ツ)_/¯


I'm not an expert on quantum computers, but I'm not aware of any applications in the embedded space.

Generally quantum computers are good for three things

* factoring numbers (and other highly related order-finding problems). RIP RSA, but not that applicable outside of crypto.

* unstructured search (brute-forcing a problem in only O(sqrt(n)) guesses instead of an average of n/2 guesses). Certainly useful... but it's not a big enough speedup to be earth-shattering.

* simulating various quantum systems (so scientists can do experiments easier). Probably by far the most useful/practical application in the near/medium term.

There's not a whole lot else they are good for (that we know of, yet)


A qubit is a physical random generator, which quickly oscillates between 0 and 1. It does that so quickly that scientists must operate with probabilities to perform calculations. They create analog quantum computers where right solutions are much more probable than wrong ones, let the system figure them out, then sample solutions periodically.


In short, I'd say you need to understand the underlying mathematics to intuitively understand the operations that underpin the algorithms. And since this is quantum mechanics... there's no real ELI5 version that can give you any useful understanding.


There is a comic (maybe SMBC) that covers the whole commonly false belief surrounding qubits.


https://www.smbc-comics.com/comic/the-talk-3

The basic gist I get is that quantum computing, for a very specific set of problems, like optimization, lets you search the space more efficiently. With quantum mechanics you can associate computations with positive or negative probability amplitudes. With the right design, you cause multiple paths to incorrect answers to have opposite amplitudes, so that interference causes them to cancel out and not actually happen to begin with. That's just my reading of the comic over and over, though.


Along those same lines, I once heard a description that went something like: "Imagine a Jenga tower so precisely balanced that, when it falls over, it spells out the answer."


I found the comic not very good either. Kid suddenly blurts "Wait a minute, that means a qubit corresponds to a unit vector in 2 dimensional Hilbert space" Yeah.


I think you have to take that sarcastically like when she mentions that it's not like the classical probability taught in preschool.


Not quite what I meant. I think the comic is excellent. In case it's not painfully obvious, the joke is equating lay people's poor understanding of quantum mechanics to a child's misunderstanding of sex. It's not trying to dumb down the concept of quantum computers at all; it's just trying to point out how incorrect the dumb pop-science simplifications of it are. The 'qubits are bits with a range of values between 0 and 1 that use quantum magic to test all possibilities at once' idea is utter fiction.


Well, my point is the GP is asking for good material for explaining the subject, and this comic is not it. I found it only irritating, tbh.


Short answer: there isn't an easy answer. Yet. (Give QC another 50 years).

Proof? Just look at all the replies you got: each one is dozens of pages of complex (imaginary) math, control theory, and statistics.

The hardest part of QC is exactly what you described: how to extract the answer. There is no algorithm, per se. You build the system to solve the problem.

This is why QC is not a general purpose strategy: a quantum computer won't run Ubuntu, but it will be one superfast prime factoring coprocessor, for example (or pathfinder, or root solver). You literally have to build an entire machine to solve just one problem, like factoring.

Look at Shor's algorithm: it has a classical algorithm and then a QC "coprocessor" part (think of that like an FPU looking up a transcendental from a ROM: it appears the FPU is computing sin(), but it is not, it is doing a lookup... just an analogy). The entire QC side is custom built just to do this one task:

https://en.wikipedia.org/wiki/Shor%27s_algorithm

In this example he factors 15 into 5x3, and the QC part requires FFTs and Tensor math. Oy!
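To make the classical-scaffolding-plus-quantum-coprocessor split concrete, here is a toy Python sketch of Shor's classical part for N=15 (function names are mine; classical brute force stands in for the quantum order-finding step, which is the only piece the QC actually does):

```python
from math import gcd

def find_order(a, N):
    """Find the order r of a mod N: the smallest r with a^r = 1 (mod N).
    This is the step the quantum 'coprocessor' speeds up; brute force
    like this is exponential in the number of digits of N."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N, a):
    """The classical scaffolding around the order-finding core."""
    if gcd(a, N) != 1:
        return gcd(a, N), N // gcd(a, N)   # lucky guess, no quantum needed
    r = find_order(a, N)                   # <-- the quantum "coprocessor" call
    if r % 2 == 1:
        return None                        # odd order: retry with another a
    y = pow(a, r // 2, N)
    p, q = gcd(y - 1, N), gcd(y + 1, N)
    return (p, q) if p * q == N else None

print(shor_classical_part(15, 7))  # (3, 5)
```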

Like I said, it will take decades for this to become easier to explain.

For fun, look at the gates we're dealing with, like "square root of not": https://en.wikipedia.org/wiki/Quantum_logic_gate


This feels like the high-level missing piece in my understanding of its use. Do you know any resources that expand on QC’s effective potential more from this point of view?


IMO one of the effective potentials of a QC is secure encrypted communications. There is a research project named QUESS (https://en.wikipedia.org/wiki/Quantum_Experiments_at_Space_S...).

This project involves a minisatellite (capable of generating entangled photons in space) to establish a space platform with a long-distance satellite and ground quantum channel, and to carry out a series of tests of fundamental quantum principles and protocols in space at a large scale.


Sorry, I do not.


All good. I appreciate the perspective you’ve given!


Macroeconomics. Central banks are "creating" a trillion here, a trillion there, like nobody's business. But what are the consequences? What is the thought process that central bankers have gone through to make these decisions?

Also why, exactly, are they buying the exact assets that they are buying (govt. debt, high-yield bonds, etc..) and why not others (e.g. stocks or put money into startups)? And then, what happens if a debtor pays back its debt? Is that money consequently getting "erased" again (just like it's been created)? What happens if a debtor defaults on its debt? Does that money then just stay in the economy, impossible to drain out? What is the general expectation of the central banks? What percentage of the debt is expected to default and how much is expected to be paid back?

And specifically in the case of central banks buying govt. debt: Are central banks considered "easier" creditors than the public? What would happen if a country defaults on a loan given by a central bank? Would the central bank then go ahead and seize and liquidate assets of the country under a bankruptcy procedure to pay off the debt (like it would be standard procedure for individuals and companies)?


The consequences are supposed to be inflation. Instead, we seem to only get asset inflation (houses, stocks) and not consumer goods inflation (flour). From a former econ prof who I spoke to recently: the central bankers aren't thinking about inequality or inflation right now, they're just trying to avoid the apocalypse.

The liquidity injected is supposed to be taken out later, thus removing the inflationary distortion. Whether it will or not is anyone's guess. 2008's injections have yet to be taken out.

Central banks are easier creditors because, while autonomous, they are the same country as the government! So it's technically like owing yourself money. A central bank that cooperates with the debtor country (itself) would never force a default, and is thus never an acute problem. Of course, infinite money printing should lead to dangerous inflation.


> the central bankers aren't thinking about inequality or inflation right now, they're just trying to avoid the apocalypse.

The bad news is that by ignoring inequality, they may be just causing it.


> Instead, we seem to only get asset inflation (houses, stocks) and not consumer goods inflation (flour).

That doesn't sound surprising when all that injected money goes directly to banks instead of individuals.


I have done plenty of serious reading on economics but as far as an approachable start, I don't think we can do better than this: https://economixcomix.com/

If you or a friend wants a crash course on econ, check it out.


One approach is reductio ad absurdum:

If Central Banks can create money without negative effects, then

- why tax people?

- why even work? Can't we just print enough money for everyone and live happily ever after?

I realize these questions are quite provocative and their answering only explains if it will work but not how or when it will fail.


> - why tax people?

Printing money is actually more or less equivalent to a tax, because it reduces the value of the existing money supply.

> - Can't we just print enough money for everyone and live happily ever after?

No, because printing money redistributes wealth, it doesn't create it.
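A toy quantity-theory calculation makes the "printing is a tax" point concrete (numbers are invented, and it assumes prices scale proportionally with the money supply while the stock of real goods stays fixed):

```python
# Toy arithmetic behind "printing is a tax": the real goods don't change,
# so each existing dollar buys a smaller share of them.
money_supply = 1_000         # dollars in circulation
new_money = 250              # freshly printed
goods = 1_000                # fixed stock of real stuff, in "units"

price_before = money_supply / goods
price_after = (money_supply + new_money) / goods
implied_tax = 1 - price_before / price_after
print(f"{implied_tax:.0%} of existing holders' purchasing power transferred")
# 20% of existing holders' purchasing power transferred
```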


Not just any tax, but a wealth tax affecting anybody who has US dollars, regardless of their citizenship or location.


That's a really good point and I never thought about it this way. It just makes me think how powerful it is for the US that the world relies on the dollar as an international currency.


Yes! Exactly. This is what I tried to explain in another comment.


The gold standard ended in 1975. Since then, we have had "virtual money", which is effectively made out of thin air at the time a bank lends it to someone. Once they started this system there was probably no way back. But what effects has that had, and are they all positive? I don't know. Some say it helps prevent the situation where the rich would buy up the whole gold supply - banks can make more money for others. But the rich can use stock markets and funds too.

And look at one thing: the Apollo program started in 1961 and completed in 1975. I was quite shocked to see from the documentaries what technology they developed and had during the Apollo era, what the US built during the Cold War. UNIX development started in 1970, right? So many awesome airplanes and technology around. It looks like what we have today is mostly a (great) improvement on many inventions developed before or around the end of Apollo and the end of the gold standard.

Now that money is virtual, it looks like there won't be any major stock market crashes, as they can intervene relatively easily. But why did the space program basically die? NASA has just been talking about SLS for many years... Is it because politicians or bankers don't want it to continue? They could print money for it in minutes, couldn't they? Or would it be far more expensive than we think? Would printing so much money skyrocket inflation and bring the whole economy to its knees? Isn't the "real value of work" by chance much higher than what inflation suggests?

I read here about a year ago that some smart people are not working in scientific R&D but instead making AI to improve marketing for online e-shops selling products people don't really need. Despite many startups claiming breakthroughs daily in the news, I don't feel it that way. Those are small pieces, one by one. It looks to me like big human progress has stagnated somehow. Maybe the smart people are not motivated enough to do the real big things, or maybe it is because there is no need to compete with Russia anymore.

But hey, there have been talks about helicopter money recently - giving people money for no reason. Does that look like a great way to improve things? We live in peace - thanks for that - but it seems to me that somehow there is no motivation to do big things anymore. It seems better to just pour the money into the stock market, consume goods and services, watch Netflix and play games. Consume gas and limited Earth resources with virtually unlimited paper money. Taxation probably doesn't make sense anymore; it is there just for historical reasons, from the time bank notes were backed by gold.


I agree this is a good starting point for understanding the financial system! For anyone with a programming background I recommend thinking of the whole financial/economic/political system as an algorithm for distributed decision making: who does what, what physical resources get committed to what projects, etc. You can then start to figure out how each component (money, banks, central banks, government, stock market, derivatives market, limited liability, interest, inflation, etc) serves to guide decisions.

The main purpose of a central bank imo is to keep money creation at arms length from government, so a rubbish government can't fiddle with the financial system too much.


- why tax people?

So that common services (eg. healthcare in Canada, education everywhere, roads) can be collectively paid for.

- why even work? Can't we just print enough money for everyone and live happily ever after?

Because there wouldn't be enough resources. (Fiat) currency is just a medium of exchange for real stuff. Instead of growing wheat and trading it for your wooden furniture, the state provides a medium of exchange so we can both just transact in money. eg. when I don't need more furniture but you still need wheat.

Money =/= wealth


Another thing to realise, along these lines, is that economic theories try to explain the world's economy, not dictate it.

I would say that some mathematicians went into economics to win Nobel prizes (which they did win) and I guess they would probably be quick to point this out as well.


Economics is a branch of moral philosophy and very consciously uses persuasive rhetoric to dictate policy. One of the tools used is hand-waving hidden inside differential equations to produce models that would be useless in any other discipline.

I don't think I've ever seen a mainstream economic prediction that was actually correct.

It's not hard to understand why. Reducing all the sectors of a complex economy to crayon-drawn measures like "inflation" and "unemployment" - which aren't even measured with any consistency - is like trying to predict the weather in the Bay Area using a single weather station for the entire continental US, which is conveniently located on the wall of a cow shed in Kansas.


There are two branches of economics. One of them is a science, the other is (politically, economically, and culturally) successful.


It's possible that they can create money up to a point with positive effects and beyond that point the effects are negative.


"USA Inc.", a somewhat dated report by (I believe) Mary Meeker, has one view showing the USA on a "GAAP" basis. It's not pretty. Leverage is too high. Profit margins are negative. "Growth" (in this case government receipts) underperforms entitlement spend. Interest is a material portion of revenue, something like 15% to 20% (caveat: even the most highly leveraged private-equity-bought companies generally don't spend more than 25% of revenue on amortization and interest).

A better but harder resource is, I believe, by Piketty (the inequality guy), on the ways out of balance-sheet recessions - there are usually only a few. He and his co-authors go through every single known recession in every single country (obviously biased towards recent and Western ones). What I took away from it is that without population growth, the US needs hyperinflation, to default on its debt, or to increase tax revenues sharply and/or decrease entitlements sharply. It's up to you to guess which of those three is most likely.

But the budgetary situation is not tenable.


- Consequences of creating a trillion? Economic policy is largely driven by political necessity. During the Great Depression, FDR's policy was influenced by Keynesian economics. Keynes aside, FDR tried everything to alleviate the suffering of the people, because the danger was not in implementing the wrong policy; the danger was in doing nothing. Later, the monetarist school became popular, prescribing increases in the money supply to pump up a troubled economy. During the last financial crisis, the Federal Reserve pursued "Quantitative Easing", where it bought up the junk to keep the big banks solvent. The big banks were "too big to fail" and had to be rescued. The consensus of the last ten years is that Quantitative Easing restored the financial health of the elite and the corporations, but the middle class was left behind.

- Assets that they are buying? The idea is to keep the banking system solvent and to prevent a domino effect where the liquidation of one big bank results in a run on other banks. The big banks got into trouble because they took depositors' money and invested it in junk that went belly-up. The federal government insures everyone's bank deposits; if the banks went under, the FDIC would have to pay out. Better that the banks stay solvent.

- There are cases like Greece defaulting on its international loans. The EU forced Greece to agree to an austerity plan, lowering Greece's payments in exchange for changes to its national spending - deeply unpopular in both the EU and Greece. But there was no real alternative. Well, the alternative is what happened to Weimar Germany after WWI: hyperinflation, economic destruction, and the longing for a savior.


You cannot really "create" money, but they do it anyway because the natural pricing mechanisms (the laws of supply and demand) take a non-zero time to reach a new equilibrium.

So the aim is to take advantage of this transient fluctuation and the way the ripple propagates.

But even if, in a perfect (or future) world, everybody instantly repriced all goods relative to the new amount of available currency, there is still an effect: how the newly "created" currency is distributed - who gets the shiny new coins? Once repricing is done, this is equivalent to a kind of global, instantaneous tax-and-subsidy: everybody is taxed by the percentage of currency created (relative to the total existing amount), and the lucky ones receiving the fresh money thereby get a subsidy.
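As a toy numeric illustration of that tax-and-subsidy equivalence (hypothetical holders and amounts, assuming instantaneous repricing against a fixed stock of real goods):

```python
# Toy model: creating new currency works like a proportional tax on all
# existing holders, with the proceeds going to whoever receives the new
# money, assuming prices against a fixed stock of real goods adjust
# instantly.

def purchasing_power(holdings, money_supply, real_goods=1000.0):
    """Units of real goods each holder can claim at equilibrium prices."""
    return {name: bal / money_supply * real_goods
            for name, bal in holdings.items()}

holdings = {"alice": 600.0, "bob": 400.0}
before = purchasing_power(holdings, money_supply=1000.0)

# The central bank creates 10% more currency and hands it all to Bob.
holdings["bob"] += 100.0
after = purchasing_power(holdings, money_supply=1100.0)

# Alice's claim on real goods shrank even though her balance never moved:
# that loss is the implicit tax, and Bob's gain is the matching subsidy.
print(before["alice"], "->", round(after["alice"], 2))  # 600.0 -> 545.45
print(before["bob"], "->", round(after["bob"], 2))      # 400.0 -> 454.55
```

Alice's loss exactly equals Bob's gain here, which is what makes the "tax-and-subsidy" framing apt.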


I've wondered the same thing lately whenever someone here posits that defaults cause the destruction of money.

I'd love to see this properly explained, because it definitely has a counter intuitive ring to me.


It works like this: imagine we create some new Bank. Customer A deposits his life savings of 1 million hackerbucks.

Now our Bank loans out 50,000 of those hackerbucks to Customer B. It does this by crediting her account with 50,000 hackerbucks, but notice that Customer A still has 1 million in his account - so now there's 1,050,000 hackerbucks in apparent existence - we've created 50,000 hackerbucks from thin air. If Customer B withdraws the loan money to go spend it, the Bank will have 950,000 in reserves and an asset worth 50,000 (the loan). Customer B will have 50,000 in cash.

What we've actually done is increase the "M2", one of the measures of how much money is in the economy.

If Customer B either repays the loan or defaults on it, that new money disappears. In the loan repayment case, the Bank goes back to having 1,000,000 in reserves, and in the loan default case the loan asset becomes worthless and it is left with only 950,000 in reserves (the other 50,000 is out there with wherever Customer B spent it).
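The bookkeeping in this story can be sketched in a few lines (a toy model using the hackerbucks numbers above; reserve requirements and capital rules are ignored):

```python
# Toy bookkeeping for the hackerbucks story: deposits are what the M2-ish
# measure counts, so a loan credited as a deposit expands it, and a
# repayment shrinks it again.

class Bank:
    def __init__(self):
        self.reserves = 0.0   # currency the bank holds
        self.loans = 0.0      # outstanding loan assets
        self.deposits = 0.0   # customer account balances (counted as money)

    def take_deposit(self, amount):
        self.reserves += amount
        self.deposits += amount

    def make_loan(self, amount):
        # The borrower's account is simply credited; nobody else's balance
        # went down, so deposit money appears "from thin air".
        self.loans += amount
        self.deposits += amount

    def repay_loan(self, amount):
        # Paid out of the borrower's deposit balance: both sides of the
        # balance sheet shrink, and the created money disappears again.
        self.loans -= amount
        self.deposits -= amount

bank = Bank()
bank.take_deposit(1_000_000)   # Customer A's life savings
bank.make_loan(50_000)         # Customer B's loan
print(bank.deposits)           # 1050000.0: the money measure grew by 50k
bank.repay_loan(50_000)
print(bank.deposits)           # 1000000.0: back where we started
```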


Thank you for clarifying.

So the bank's speculative asset loses value: I struggle to see this as money being destroyed as it wasn't actually money, it was an asset with a price attached to it which has now changed. In contrast to money sat in your bank account, the price was never redeemable (you couldn't go spend it on beer) unless you used that asset to get the debtor to pay you back (or convinced someone else it was worth buying from you as a speculative asset). You might as well say money is destroyed when share prices tumble. Maybe this is the point of such arguments, to make the case that money is no different to any other asset, but we don't tend to treat it like that in reality. Or do we?


It's money in the same sense as the money that was created by the loan in the first place is money - it's not physical currency, but the insight is that the money created by financial means like this bids up prices in the same way as literally minting extra physical currency and distributing it would.

When that loan asset is written down the bank has to make up the difference from its equity - this ends up reducing the amount of loans it can write, so you get a contraction in the monetary supply.

So in the end, I guess the shorter answer is that a default destroys money in the same way that writing a loan creates it - you might well complain that no actual currency has been created or destroyed, but the argument is that it has a similar overall effect.


If Customer B repays the loan, the new money disappears but the money made in interest stays. Eventually does a bank get to a point where it has enough real money, that it can lend it out instead of increasing the money supply?


Whoever defaulted spent the money on something else, so it's still in circulation somewhere, just not in a form the original creditor can get hold of.


That would be my intuition too. But if you go down almost any "this crisis won't cause inflation because..." rabbit hole on hacker news, you should see multiple unopposed claims that defaults lead to deflation through the destruction of money.

I'm pretty ignorant in this field, and usually I've been a day or so behind the posts (missing the window to press for more information), but I feel like there's definitely some contention there.


Here's how I would think of that:

Suppose I'm a bank, and I lend you $10 to buy apple tree seedlings. You spend all $10 on seedlings as promised.

The person who sold you the seedlings has $10. You have the seedlings. I have an expectation of getting $10 in the future, presumably from your sales of apples.

Because most people repay their loans, I'm confident I'll get the $10 back, and being a bank, my business is lending money. I might treat the $10 loan as $7 on my balance sheet when I decide how much money is safe to lend out.

Then the price of apples crashes. You come to me and say, 'look, there's no way I'll make $10 selling apples in the time I promised to repay you. Best I can do is deliver you the seedlings or sell them to my neighbor for $3 and give you that'. I grumble a little, but take your deal.

The person you bought the seedlings from still has $10. Your neighbor now has the seedlings and $3 less. I now have 3 real dollars instead of 7 hypothetical dollars. In other words, 4 hypothetical dollars disappeared. When I decide how much to lend out, I'll be basing that on $3 I know I have, instead of the $10 I thought I'd probably get back. I don't lend as much money to aspiring orchardists (orchardeers?), and the price of apples rises.

Edit: This fragility is probably a major factor why some people are so against fractional reserve banking (my counting hypothetical dollars as having value) but without that hack, there's no saying I could have lent you the original $10, so it's a bit of a double-edged sword.
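The seedling story can be tallied directly (a toy sketch using the numbers above; the neighbor's starting $5 is an arbitrary assumption just so the cash totals are visible):

```python
# Tallying the seedling story: physical dollars are conserved, but the
# bank's lending base (cash plus risk-discounted expected repayment)
# shrinks when the loan goes bad.

currency = {"seller": 10, "neighbor": 5, "bank": 0}  # after the loan is spent
expected_repayment = 7    # the $10 loan, discounted for risk, on my books

lending_base_before = currency["bank"] + expected_repayment   # 7

# Apple prices crash; the borrower sells the seedlings to the neighbor
# for $3 and hands that over, and I write the loan off.
currency["neighbor"] -= 3
currency["bank"] += 3
expected_repayment = 0

lending_base_after = currency["bank"] + expected_repayment    # 3

print(sum(currency.values()))                      # 15: every dollar still exists
print(lending_base_before - lending_base_after)    # 4 hypothetical dollars gone
```

No physical dollar was destroyed anywhere in the run, yet the base I lend against fell by $4 - that gap is what the "defaults destroy money" argument is pointing at.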


So in this case the business did not create as much wealth as intended (it produced apples which turned out not to be needed as much they had originally planned).

The default is a side effect of that outcome, not its cause.


Could you not see the $4 disappearing in the GP scenario?


I can see that the bank ended up with $4 less than it planned to, but I don't see that as money being destroyed. It happened because the original estimate of hypothetical dollars was wrong. (Also if the hypothetical dollars are "money" then you're double counting it: creating $10 in circulation has required creating $17 overall which strikes me as poor notation to say the least). (Also if all had gone according to plan, the extra $4 would have come from the pockets of people buying apples. That $4 is still in circulation, either in the same pockets or it got spent elsewhere).

Suppose I buy a painting. I believe it to be an original Van Gogh so I pay $10 million for it. I then find out it is fake, and worthless. Was $10 million (of money) destroyed? Of course not, I just mis-valued an asset. Suppose it then turns out to be real after all. Owing to the fascinating history of this painting it is now valued at $20 million. Was $10 million of money created (relative to the moment when I originally thought it was a Van Gogh)? No. Was $10 million of wealth created? Yes as the world now has one more thing worth $10 million in it.

Money != wealth, even in the materialist sense where wealth consists purely of goods and services. Money is a metric we use to keep track of wealth, and in general it's considered helpful if that relationship holds, so if we're trying to maintain that relation rigorously the central bank should print another $10 million (or create it by making loans) to reflect our knowledge and appreciation of the Van Gogh - if it doesn't then the existing fixed quantity of money in the system will now be representing a greater quantity of wealth, causing deflation.

As I said in my other post I am not an economist by training. If the economists want to call this thing that got created/destroyed here "money" then I guess I should let them, but I would like to hear a good reason why it makes sense to do so, and I haven't heard one. Absent of a good reason I might as well call it haddock. Or, considering the OP was asking which things that could be explained better, we could acknowledge what any good programmer knows: part of a good explanation is choosing the right names for things.


As I think of it, credit (what got destroyed) and currency (what was used to create the credit) are interchangeable - $10 of credit buys as many seedlings as $10 of currency, hence we think of them as one 'thing' - I might call 'money' the category including both, though I'm no expert in economics either and these might be nothing like the official jargon. But what does seem reliable is that destroying $4 of credit has the same effect on the price of apples as would destroying $4 of currency, though the latter is a more rare event.


The difference is that it's the liquidation process from defaults that causes deflation, not the defaulting itself. Without the liquidation process, if you assume defaulting had no consequences, that's indeed inflation - as everyone is allowed to create money without consequence.


Thanks. I think this point has made it a lot clearer.

So sure, the loaned money might still be in the system in some naive sense, but value has been destroyed in the asset price? Suddenly a lot less money buys a lot more asset and that's where we find the deflation.

If I borrow 1M for an asset in good times and can't pay it back, the creditor gets the asset and probably gets a good portion of that 1M back. If that same scenario plays out in bad times and my whole street defaults on the same asset at once, there's a resulting fire sale and far more value is destroyed (including being wiped off neighbouring, non-creditor-owned assets of the same type) than money added by leaving the loan sloshing around somewhere else in the economy.


Sure, and that's just like a net effect of a gift from the creditor to the debtor, which doesn't have any effect on the money supply.


PhD student here. Not expert on all those questions but:

> Macroeconomics. Central banks are "creating" a trillion here, a trillion there, like nobody's business. What is the thought process that central bankers have gone through to make these decisions?

The general consensus is that central banks should stay passive and keep prices stable. However, in periods of crisis, like the one we live in, the central bank can support the economy. In ordinary times, creating trillions would lead to inflation. But here the idea is to save the economy in the short term, because that's always cheaper than repairing it later. Central bankers agreed to create trillions so that banks do not go bankrupt like they did in 1929. By creating trillions, they also keep interest rates low for governments so that they can still borrow.

> But what are the consequences?

Some inflation. Another consequence is that investors will invest in riskier assets afterward to keep their profitability target. (Again, because lending trillions will lower the interest rates)

> Also why, exactly, are they buying the exact assets that they are buying (govt. debt, high-yield bonds, etc..)?

They usually buy low-risk, highly liquid assets. Putting trillions into startups is infinitely more complicated for a central bank because it implies high monitoring costs, and it also takes a lot of time to create those kinds of contracts. Remember that the goal is to provide a lot of liquidity to the economy as fast as possible. There is also an academic debate about giving money directly to the general public (known as "helicopter money"), but it has received little attention from central bankers.

> And then, what happens if a debtor pays back its debt? Is that money consequently getting "erased" again ?

Yep, pretty much... Apart from central-bank (base) money, money is constantly created and destroyed. It is mostly created by private banks when they grant loans, and it is destroyed when you repay them. Of course banks cannot create money at will, but remember: "deposits DO NOT make the credits" (though deposits do, in some sense, constrain how much can be created).

> What percentage of the debt is expected to default and how much is expected to be paid back? What would happen if a country defaults on a loan given by a central bank?

Central banks buy bonds, and bonds are pretty much always paid back. And if not, the central bank will not suffer much. Cases of countries not repaying are very rare and exceptional (I can only think of Argentina). Anyway, a country CANNOT go bankrupt like a person, and in general comparing countries with individuals or companies is not a good idea. Countries are around pretty much forever (in a financial sense); you are not. Countries can levy taxes; individuals cannot.


>> And then, what happens if a debtor pays back its debt? Is that money consequently getting "erased" again ?

>Yep, pretty much...

I would add that if the system is fractional reserve then it increases the proportion of the bank's reserve allowing more money to be created. So while it's technically true that it's destroyed you could see the next loan as its reincarnation, no..?

I didn't go here in my response above because my vague understanding is that we're not strictly a fractional reserve system any more, though I don't understand how.


There's also a concept called "capital deepening". Basically (and vaguely), when your money-supply growth outpaces GDP growth - which is not hard to do with low single-digit GDP growth - you have more capital available per "unit of GDP". Therefore asset prices go up.

At least for non-private companies: asset prices go up... auctions clear at maximum-leverage valuations... recession... monetary stimulus... repeat.


My take on this Modern Monetary Theory is that central banks are all just creating these trillions, but they are doing it at rates relative to each other.

I think eventually it will have bigger consequences, but it will take some time for these trillions to filter on down.

I remember one outcome of the 2008 crisis was that consumer goods like cereal boxes stayed the same size, but the bag inside the box got smaller.


Not sure I'd class it as a science, but here's my take... though I'm no expert and certainly agree this stuff could be better explained!

> Macroeconomics. Central banks are "creating" a trillion here, a trillion there, like nobody's business. But what are the consequences?

Nobody quite knows; it's still hotly contested between left wing lovers of Keynes and right wing believers in austerity.

> What is the thought process that central bankers have gone through to make these decisions?

Probably largely a political one. Central banks may be trying to fulfill a remit set by law (e.g. Bank of England: keep inflation below x%) and are trying to deliver on that. (Why? Too much or too little inflation both cause problems; I guess we somehow reached consensus on a "sane" amount that keeps pace with genuine growth of wealth within the economy.)

> Also why, exactly, are they buying the exact assets that they are buying (govt. debt, high-yield bonds, etc..) and why not others (e.g. stocks or put money into startups)?

I think this is about distributed decision making. The central bank does not have the expertise to decide which stocks or startups represent the best investments. The examples here involve lending money to government, presumably the idea being the latter is better placed to decide what to do with the money. Another example is buying assets from other banks, which are again better placed to decide which businesses/homeowners/etc represent a more sound investment as they do it on a daily basis (from a profit/loss point of view ... of course we debate whether or not that's the case on a societal level).

> What would happen if a country defaults on a loan given by a central bank?

Internally it would depend on laws and the balance of political power within the country. Between countries, depending on the currency, the country could do crazy stuff like print excessive amounts of money to repay the loan (Germany did this in the early 1920s, leading to hyperinflation), or it could, as you say, simply default. The country's credit rating would then be downgraded, making it harder for it to raise credit in future.

> Would the central bank then go ahead and seize and liquidate assets of the country under a bankruptcy procedure to pay off the debt (like it would be standard procedure for individuals and companies)?

Not the bank, but the country making the loan, may first negotiate some debt relief with strings attached e.g. preferential trade agreements. Beyond that, I have no idea what precedent exists.


>Nobody quite knows

A lot of these things are kind of unknowable because they depend on future human behaviour in ways you can't really predict. A lot of George Soros's theory of reflexivity is along those lines: people think they are calculating on the basis of fundamentals, but the things that look like fundamentals are actually functions of human behaviour, so the system is inherently unstable. He's made a few bob from that.


I recommend the Planet Money podcast by NPR.


I would like to understand how cellular biology processes actually work. Like, how do all the right modules and proteins line up in the right orientation every time? Every time I watch animations, it seems like the proteins and such just magically appear when needed and disappear when not needed [0]. Sometimes it's an ultra-complex-looking protein and it just magically flies over to the DNA, attaches to the correct spot, does its thing, detaches, and flies away. Yeah right! As if the protein is being flown by a pilot. How does it really work?

[0] https://youtu.be/5VefaI0LrgE


They don't. This is a pet-peeve of mine, and it's reinforced by animation after animation.

Everything is being jostled around randomly. The molecules don't have brains or seeker warheads. They can't "decide" to home in on a target.

The only mechanisms for guidance are: diffusion due to concentration gradients, movement of charged molecules due to electric fields, and molecules actually grabbing other molecules.

It's all probabilities. This conformation makes it more likely that this thing will stick to this other thing. You may have heard that genes can be turned on or off. How? DNA is literally wound on molecular spools in your cell nuclei. When the DNA is loosely wound other molecules can bump into it and transcribe it -- the gene is ON. When the DNA is tightly spooled, other molecules can't get in there and the gene is OFF for transcription. There's no binary switch, just likelihoods.

Everything is probabilistic, but the probabilities have been tuned by evolution through natural selection to deliver a system that works well enough.


Even diffusion isn't some magical force guiding chemicals through the medium. It's just random movement that statistically results in the chemical being spread out. This is the same principle that the 2nd law of thermodynamics is based upon. There's nothing magic to it, it's just the statistically likely end result over many particles.
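A minimal simulation makes this concrete: unbiased random steps, with no force or goal, still produce spreading that grows linearly with time, which is the macroscopic signature of diffusion (pure-Python sketch, illustrative numbers only):

```python
# Diffusion as nothing but random motion: many 1-D random walkers, each
# taking +/-1 steps at random, spread out on their own with no guidance.

import random

def walk(steps):
    """Final position of a 1-D particle taking +/-1 steps at random."""
    pos = 0
    for _ in range(steps):
        pos += random.choice((-1, 1))
    return pos

random.seed(0)  # deterministic run, purely for illustration
positions = [walk(1000) for _ in range(5000)]

mean = sum(positions) / len(positions)
var = sum((p - mean) ** 2 for p in positions) / len(positions)

print(round(mean, 1))  # near 0: no net drift in any direction
print(round(var))      # near 1000: spread grows with the step count
```

Double the number of steps and the variance roughly doubles - no particle "knows" where to go, yet the ensemble reliably spreads out.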


Yes. It's interesting how powerful and clarifying this model of its-all-just-atoms-bumping-into-atoms is. It's interesting how many people take science courses, but don't really get this.

In the context of Covid-19, I see so many people wearing PPE but failing to act as though they understand that the actual goal is to prevent this tiny virion dust from entering your orifices. Like wearing gloves and a mask, but then picking up an unclean item in the store and using the now-unclean gloves to adjust the mask, making it unclean too.

People seem to think of things as having essences or talismanic effects. Like gloves give you +2 against covid and a mask gives you +5 when it's really all about preventing those virus things from bumping into your cell things.


People understand 'germs'; we don't live in a magical culture. It's not that they don't understand contamination - they just haven't thought far enough ahead when they adjust their mask.


Masks are for keeping your own particles from spreading far, not the other way around.


> Masks are for keeping your own particles from spreading far, not the other way around.

Masks are for keeping your own particles from spreading far AND for lowering the probability of virions found in the environment from entering your respiratory system.

Masks lower the probability when all other variables are held constant. If someone thinks wearing a mask grants invincibility and in turn chooses to increase their exposure to high viral load individuals or environments, they're putting themselves at risk.


> Masks are for keeping your own particles from spreading far AND for lowering the probability of virions found in the environment from entering your respiratory system.

Both of you may be correct. I think the person you responded to may not have been precise in their framing.

I suspect that you had N95 masks in mind when you wrote masks, which doesn’t negate the point of the person you responded to, if they had surgical masks in mind when they wrote masks. Surgical masks are far more common than N95 masks since they are cheaper and do not provide protection against viral particles for the wearer.


Surgical masks do provide some level of protection against virus droplets and aerosols for the wearer; they are just not as effective as N95s. Even a tea cloth or a scarf wrapped around your face will provide some level of protection against virus particles entering your mucous membranes.


As stated, this is not the whole truth. Please stop spreading this myth. This particular myth may actually cost lives.

https://smartairfilters.com/en/blog/n95-mask-surgical-preven... https://smartairfilters.com/en/blog/coronavirus-pollution-ma...


Sorry, my comment was not very clear and is prone to misinterpretation. I'm not saying masks don't keep infection out, but rather that the point of society-wide mask adoption is more to keep unwitting spreaders from spreading so widely. I mean it does both, but as I understand it, it's main value is to attenuate sources than vice versa.

I'm in Taiwan where masks are ubiquitous, and have been upset reading about the slow adoption of masks in the West, because it was always framed from a selfish perspective ("do masks protect ME?") whereas here they're worn for a communal purpose ("how do I protect others?"). How effective they are at blocking incoming infection always seemed like a big distraction to me, since it's been clear from the start that masks reduce spray from spreaders talking and coughing, which alone is enough of a reason to adopt them widely.


Man, you and the other what-are-fields post just started me thinking about whether diffusion and fields are just things bumping into things. I know that at the QFT level things like the classical E-field can be expressed as interchange of mediator particles. But then QFT says it's all fields. Hmm...


QFT says it's all fields because it is. Particles simply cannot explain the conjunction of quantum mechanics with special relativity.


I am not so sure about that. When you imagine a "particle", what do you see? Do you see a collection of balls?


How do you mean?

To clarify: a "point particle" is an object with no internal structure, that is, it can be fully described by its coordinates wrt time (ignoring relativity for now). This is a concept, a model which explains many phenomena, a model on top of which you can build many theories. It does not, however, explain the conjunction of QM with special relativity.


It would be great if they showed just one animation up front of the chaotic mess that actually represents reality. They could then show the simplified version so that we can actually see what is going on.


https://www.researchgate.net/profile/Nicolas_Bellora/publica... is one example of the chaotic mess. What that shows is many RNA polymerase molecules walking up a gene. The horizontal line across the middle is DNA. The vertical tails hanging off it are RNA being built as the DNA is transcribed.

What that image drove home for me is:

1) that DNA transcription isn't something that happens rarely, or once-at-a-time. DNA is constantly being transcribed; proteins are constantly being built. The scale and rate isn't something I'd ever been taught.

2) How RNA polymerase works must be taking into account a hell of a lot of congestion. Polymerase molecules must constantly be bumping into each other.

3) How the picture would make no sense whatsoever unless you already know what the mechanism is.

I think it does make sense to start with the idealised process, as long as you follow up with messy reality.


The best programmer analogy I can think of is: imagine a system where every instruction always runs concurrently and every output influences everything with varying probabilities.


I once saw a video that purported to show the jittering for some simple chemical reaction; it was indeed very enlightening.


It's not so much "magic" as it is the sheer rate of molecular collisions in the cytosol. There are so many collisions happening that at least one of them will do what you want. Here's a back-of-the-napkin example, admittedly with many simplifications:

A tRNA molecule at body temperature travels at roughly 10 m/s. Assuming a point-sized tRNA and a stationary ribosome of radius 125 * 10^-10 m, the ray cast by the moving tRNA will collide with the ribosome when their centers are within 125 * 10^-10 m of each other. The path of the tRNA sweeps a "collidable" circle of radius 125 * 10^-10 m, for a cross-sectional area of 5 * 10^-16 m^2. Multiplied by the tRNA velocity, the tRNA sweeps a volume of 5 * 10^-15 m^3 per second. Constrained inside an ordinary animal cell of volume 10^-15 m^3, the tRNA would have swept the entire volume of the cell five times over in a single second. Obviously the collision path would have significant self-overlap, but at this rate it's quite likely for the two to collide at least once in any given second.

Now, consider that this analysis was only for a single ribosome/tRNA pair. A single ribosome will experience this collision rate multiplied by the total number of tRNA in the cell, on the order of thousands to millions. If a ribosome is bombarded by tens of thousands of tRNA in a single second, it's very likely one of those tRNA will (1) be charged with an amino acid, (2) be the correct tRNA for the current 3-nucleotide sequence, and (3) collide specifically with the binding site on the ribosome in the correct orientation. In actuality, a ribosome synthesizes a protein at a rate of ~10 amino acid residues per second.

Any given molecule in the cell will experience millions to billions of collisions per second. The fact that molecules move so fast relative to their size is what allows these reactions to happen on reasonable timescales.
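For anyone who wants to check the arithmetic, here is the same back-of-the-napkin estimate as a few lines of Python (every input is one of the rough assumptions from above, not a measured value):

```python
# Back-of-the-napkin tRNA/ribosome collision estimate, spelled out.

import math

v_trna = 10.0          # assumed tRNA speed at body temperature, m/s
r_ribosome = 125e-10   # assumed ribosome radius, m
cell_volume = 1e-15    # ordinary animal cell volume, m^3

cross_section = math.pi * r_ribosome ** 2   # "collidable" disc, ~5e-16 m^2
swept_per_second = cross_section * v_trna   # volume swept, ~5e-15 m^3/s

sweeps_of_cell_per_second = swept_per_second / cell_volume
print(round(sweeps_of_cell_per_second, 1))  # 4.9: ~5 cell volumes per second
```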


I'd love to see a form of physical analysis like this extended to a statistical analysis of the likelihood of abiogenesis.

I know 4 billion years is a long time and the earth has a lot of matter rattling on it at any given time, but if every atom in the universe was a computer cranking out a trillion characters per second, you'd only have a 1 in a quarter quadrillion chance of making it to 'a new nation' in the first sentence of the Gettysburg address. Seeing the complexity in even the most trivial biological system just makes me scratch my head and wonder how its possible at all.

I'm not invoking God here. I just see a huge gulf in complexity that is difficult for me to traverse mentally.


Fantastic answer. I don't know what I expected, but I find ~10 amino acid residues a second to be somewhat low.


The issue with these animations is that they're getting rid of all the thermal noise. In reality, single proteins are flying around the whole length of the cell many times a second, just from their thermal motion. And when processes like DNA transcription happen, they're not like a regular conveyor belt -- a fraction of the time the machine will even accidentally run steps in reverse! However, if any of this were shown, the animations would become impossible to understand.


Yes to getting rid of thermal noise. No-ish to single proteins flying around the cell that fast. The cytosol is incredibly jam-packed and things keep getting hung up on other things, so we'd expect the mean free path to be quite small for the larger biomolecules.


Just once I would like to see the realistic animation, though, even if it's impossible to understand.


https://www.youtube.com/watch?time_continue=42&v=uHeTQLNFTgU

This comes close -- It shows the jittery thermal motion of this tiny machinery, instead of nice smooth glides.


This segment is not the worst, but the full version of Inner Life of the Cell is terrible, because they cheated by reversing highly symmetrical processes, for example:

https://www.youtube.com/watch?v=B_zD3NxSsD8&t=3m17s

The artistic director has a TED talk where he talks about how beautiful biological processes are, and it's like, no, man, you made it look that way.

If you want a really fantastic video that captures just how messy and random it is, I recommend the WEHI videos, like the one on apoptosis, where the proteins look way more derpy than in Inner Life of the Cell: https://www.youtube.com/watch?v=DR80Huxp4y8 There are a couple of places where they have a hexameric protein whose parts magically snap into place, but I give them a pass because the kinetics on that are atrociously slow. Let's just say that, for the sake of a short video, the cameraman happened to be at the right place at the right time.


Oh my, that's facepalm-dreadful. Thank you! That gives me a new high-water mark for misleading biomolecular-visualization computer graphics. Snagged a copy.

When most everything is unmoving, it's "obvious"... well no, not to students, but... there's no pretense of doing anything other than stitching together an extremely selective set of "snapshots", to tell a completely bogus narrative of smooth motion.

Here it seems something like a Maya "jiggle all the things" option has been turned on. Making it sort of kind of look like you're being shown more realistic motion. But you're so not. It's the same bogus smooth narrative, now with a bit of utterly bogus jiggle. Those kinesin legs still aren't flailing around randomly. Nor only probabilistically making forward progress. And the thing it's towing still isn't randomly exploring the entire bloody space it can reach given the tether, between each and every "step". It still looks like a donkey towing a barge, rather than frog clinging to rope holding a balloon in a hurricane.

And given that the big vacuole or whatever should be flailing at the timescale defined by the kinesin feet, consider all those many much smaller proteins scattered about, just hanging out, in place, with a tiny bit of jiggle. Wow - you can't even rationalize that as being selective in "snapshots" - those proteins should just be blurs and gone.

And that's just the bogosity of motions, there's also... Oh well.

So compared with older renders, these new jiggles made it even harder to recognize that all the motion shown is bogus. And not satisfied with the old bogus motion, we've added even more. Which I suggest is dreadful from the standpoint of creating and reinforcing widespread student misconceptions. Sigh.


you might like this render better:

https://www.youtube.com/watch?v=DR80Huxp4y8

here's the artistic director for the inner life of the cell (the worse one) going on and on about how "beautiful" the science of biology is:

https://www.ted.com/talks/david_bolinsky_visualizing_the_won...


> artistic

Yeah. One might, for example, reduce reinforcement of the big-empty-cell misconception by briefly showing more realistically dense packing, e.g. [1], before fading out most of it to what can be easily rendered and seen. But that would be less "pretty". Prioritizing "pretty" over learning outcomes... is perhaps suboptimal for education content.

> better

But still painful. Consider those quiet molecules in proteins, compared with surrounding motion. A metal nanoparticle might be that rigid, but not a protein.

One widespread issue with educational graphics, is mixing aspects done with great care for correctness, with aspects that are artistic license and utter bogosity. Where the student or viewer has no idea which aspects are which. "Just take away the learning objectives, and forget the rest" doesn't happen. More like "you are now unsalvageably soaked in a stew of misconceptions, toxic to transferable understanding and intuition - too bad, so sad".

So in what ways can samplings of a protein's configuration space be shown? And how can the surround and dynamics be shown, to avoid misrepresenting that sampling by implication?

It can be fun to picture what better might look like. After an expertise-and-resource intensive iterative process of "ok, what misconceptions will this cause? What can we show to inoculate against them? Repeat...". Perhaps implausibly intensive. I don't know of any group with that focus.

[1] https://www.flickr.com/photos/argonne/8592248739


david goodsell's pictures are fantastic. I used to work down the hall from him!


Agreed; cool, seems a neat guy. And much of his work is CC-BY, thus great for open education content. Hmm, the Wikimedia Commons capture of his work seems to be missing quite a bit. Oh nifty, there's now an interactive version of his 2014 "Molecular Machinery: A Tour of the PDB".[1]

[1] https://cdn.rcsb.org/pdb101/molecular-machinery/ [] http://pdb101.rcsb.org/sci-art/goodsell-gallery [] http://pdb101.rcsb.org/motm/motm-by-date


At least there is some water there. But what strange force is holding the proteins together when they are completely out of alignment, and keeping the water away from everything else?


Well, OP did say "even if it's impossible to understand" so if it is in fact in any way misleading, then my lawyers assure me that I may claim the full privileges of a contextual get-out-of-jail-free card for linking to it, and am hereby fully absolved of any intellectual harm caused to any and all individuals who may have viewed it.


Ha. I've wondered if increasing embarrassment might reduce long-term stable misconceptions in education content. Like astronomy texts getting the color of the Sun wrong. Or wing lift discussed elsewhere. But making textbooks liable for intellectual harm... wow. What might the internet, media, politics, thought and conversation look like, if we were all liable for negligent intellectual harm?


It would be pretty boring: proteins bouncing around randomly and occasionally hooking up, substrates flying around like rifle bullets and sometimes hitting the target, and everything smooshing around in random directions. If you've seen Brownian motion, you've seen what is happening to all the molecules, but at 1/100 the length scale. Nothing stays put. Everything is moving fast and far on the scale of proteins and small molecules.


Fast, yes. Far, well, the mean free path in a cell is very short.



Any time something "magically lines up", it means that those molecules randomly float around until the right ones bump into each other.

Once they are in close enough proximity to bump into each other, intermolecular forces can come into play to get the "docking process" done.

For something like transcription, once they are "docked", think of it like a molecular machine - the process by which the polymerase moves down the strands is non-random.

There are also several ways to move things around in a more coordinated fashion. Often you have gradients of ion concentration, and molecules that want to move a certain direction within that gradient. You also have microtubules and molecular machinery that moves along them to ferry things to where they need to be. You can also just ensure a high concentration of some molecule in a specific place by building it there.


Float is the wrong word to use, I think. Float implies gravity and water. At the scale of a cell, gravity is not as important as intermolecular forces like van der Waals forces, and fluids do not behave the way we intuitively expect.


A friend of mine showed me this writeup when I asked a similar question, and it helps to clear up a lot of the "magic" movement:

http://www.righto.com/2011/07/cells-are-very-fast-and-crowde...

But in a nutshell, the animations are heavily idealized: showing the process only when it succeeds, slowing it way, way down, and totally ignoring 90% of the other nearby material so you can see what's going on. Then you remember that you have just a bajillion cells within you, all containing this incredibly complex machinery and... it's really kind of humbling just how little we actually know about any of it. Not to discredit the biologists and scientists for whom this is their life's work; we've made incredible amounts of progress over the last century. It's just... we're peeking at molecular machinery that is so very small, and moves so quickly, that it's nigh impossible to observe in realtime.


A few different things help everything work:

1) Compartmentalizing biological functions. It's why a cell is a fundamental unit of life, and why organelles enable more complex life. Things are physically in closer proximity and in higher concentrations where needed.

2) Multienzyme complexes. Multiple reactions in a pathway have their catalysts physically colocated to allow efficient passing of intermediate compounds from one step to the next.

https://www.tuscany-diet.net/2019/08/16/multienzyme-complexe...

3) Random chance. Stuff jiggles around and bumps into other stuff. Up to a point, higher temperature means more bumping around, meaning these reactions happen faster; the more opportunities these components have to fly together in the right orientation, the more life stuff can happen, more quickly. There's a reason the bread dough that apparently everyone is making now will rise faster after yeast is added if the dough is left at room temp versus allowed to do a cold rise in the fridge. There are just fewer opportunities for things to fly together the right way at a lower temperature.

3a) The ultra-complex proteins binding to the DNA often work, in reality, by binding somewhat randomly and scanning along the DNA for a bit until they find what they're looking for or fall off. Other proteins sometimes interact with proteins that are bound to the DNA first, which act as recruiters telling the protein where to land.


The common theme there is constrained proximity. To give random chance more of a chance.

My favorite illustration was a video of simulated icosahedral viral capsid assembly. The triangular panels were tethered together to keep them slamming into each other. Even then, the randomness and struggle were visceral. Lots of hopeless slamming; almost but not quite catching, tragically; being smashed apart again; misassembling. It was clear that without the tethers forcing proximity, there'd be no chance of successful assembly.

Nice video... it's on someone's disk somewhere, but seemingly not on the web. The usual. :/

> yeast

Nice example. For a temperature/jiggle story, I usually pair refrigerating food to slow the bacterial jiggle of life, with heating food to jiggle apart their protein origami string machines of life. With video like https://www.youtube.com/watch?v=k4qVs9cNF24 .

> Compartmentalizing

I've been told the upcoming new edition of "Physical Biology of the Cell" will have better coverage of compartmentalization. So there's at least some hope for near-term increasing emphasis in introductory content.


Coincidentally I'm previewing PBotC just now. It looks really promising. Do you know roughly when the new edition is expected? Or if you have any favorite books on how things work at that scale, I'd be grateful for the pointer. (I've read a popular book by David Goodsell and am halfway through a somewhat deeper one.)


> PBotC [...] when the new edition

No idea, sorry.

> favorite books on how things work at that scale

I've found the bionumbers database[1] very helpful. Google scholar and sci-hub for primary and secondary literature. But books... I'd welcome suggestions. I'm afraid I mostly look at related books to be inspired by things taught badly.

The bionumbers folks did a "Cell Biology by the Numbers" book... the draft is online[2].

Ha, they've done a Covid-19 by the numbers flyer[3].

If you ever encounter something nice -- paper, video, text, or whatever, or even discussion of what that might look like -- I'd love to hear of it. Sorry I can't be of more help.

[1] https://bionumbers.hms.harvard.edu/search.aspx [2] http://book.bionumbers.org/ [3] http://book.bionumbers.org/wp-content/uploads/2020/04/SARS-C...


Thanks! I guess I'll try the bionumbers book first.

I'll keep you in mind, too.


I studied bioinformatics and found the standard textbook, Alberts' "Molecular Biology of the Cell"[0], to be one of the most captivating books I've read. It's like those extremely detailed owners' manuals for early computers, except for cells.

The amount of complexity is just absolutely insane. My favourite example: DNA is read in triplets. So, for example, "CAG" adds one Glutamine to the protein it's building[1].

There are bacteria that have optimised their DNA in such a way that you can start at a one-letter offset, and it encodes a second, completely different, but still functional protein.
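A toy illustration of that frame-shift trick (the sequence and the two-entry codon table here are contrived for brevity; a real codon table has 64 entries):

```python
# Minimal codon table covering only the codons in this toy example.
CODONS = {"CAG": "Q", "AGC": "S"}  # Q = glutamine, S = serine

def translate(seq, frame):
    """Translate a DNA coding sequence starting at the given frame offset."""
    return "".join(CODONS[seq[i:i + 3]] for i in range(frame, len(seq) - 2, 3))

seq = "CAGCAGCAGCAG"
print(translate(seq, 0))  # frame 0 reads CAG|CAG|CAG|CAG -> QQQQ
print(translate(seq, 1))  # frame 1 reads AGC|AGC|AGC -> SSS
```

Same string of letters, two completely different proteins, depending only on where you start reading.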

I found the single cell to be the most interesting subject. But of course it's a wild ride from top to bottom. The distance from brain to leg is too long, for example, to accurately control motion from "central command". That's why you have rhythm generators in your spine that are modulated from up high (and also by feedback).

Every human sensory organ activates logarithmically: Your eye works with sunlight (half a billion photons/sec) but can detect a single photon. If you manage to build a light sensor with those specs, you'll get a Nobel Prize and probably half of Apple...

[0]: https://amzn.to/2zzDt8P

[1]: https://en.wikipedia.org/wiki/DNA_codon_table


"The distance from brain to leg is too long, for example, to accurately control motion from "central command"

As a dancer, I have been fascinated by that fact. It means that dancers do not dance to the beat as they hear it - it takes too much time for the sound to be transformed by the ear/brain into an electrical pulse that reaches your leg. Instead, all dancers have a mental model of the music they dance to that is learnt by practice/repetition.

Dancing is just synchronizing that mental model to the actual rhythm being heard. When I explained that to a bellydancer friend, she finally understood the switch she had made from being a beginning dancer to an experienced dancer who 'dances in their head'.


You can clap your hands to a calibrated delay from the previous beat that you heard (predicting the next beat before you hear it). This is analogous to the principle of a phase-locked loop, which gradually adjusts an internal oscillator until it matches an external frequency. That internal oscillator can emit a beat just before the real one, offset just enough to cancel all the delays in the processing path.

This only works if the beat you're hearing is sufficiently stable.
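A toy sketch of that idea in Python (the smoothing factor and the motor-latency figure are made-up parameters, not physiological data):

```python
def predict_beats(beat_times, alpha=0.2, motor_latency=0.12):
    """PLL-style beat tracker: keep a running estimate of the beat period
    and issue each prediction early enough to cancel the processing delay."""
    period = beat_times[1] - beat_times[0]  # initial period guess
    last = beat_times[1]
    predictions = []
    for heard in beat_times[2:]:
        predictions.append(last + period - motor_latency)  # fire early
        period += alpha * ((heard - last) - period)  # nudge toward observed period
        last = heard
    return predictions

# A steady beat every 0.5 s: each prediction lands 0.12 s ahead of the real beat.
print(predict_beats([0.0, 0.5, 1.0, 1.5, 2.0]))
```

If the beat drifts, the correction term gradually re-locks the internal period; if the beat is unstable, it never locks, which matches the caveat above.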


Yeah, you often send commands several beats in advance. And then there's some lag too, because muscles are fairly viscous and take a bit of time to start up. You're basically dancing in the future, because you are behind. I think we just run pre-baked programs (from a lot of practice) and adjust their timings on the fly every few beats or a bar.


I guess the same must apply to a soccer player, except instead the mental model is about the trajectory of the ball.


Alberts' MBoC is pretty much known as the reference textbook where I studied.

Note that the 4th edition is (sort of) freely available on the NIH website. The way to navigate the book is bizarre, though, as the only way to access its content is by searching.

https://www.ncbi.nlm.nih.gov/books/NBK21054/


Cells are tiny, and the speed of sound is roughly how fast air molecules move. Proteins are not bouncing around quite that fast, but it's still very quick relative to their size. Next, there are often multiple copies of each component. That's half the story; larger cells also have various means of clumping things together to improve the odds. https://en.wikipedia.org/wiki/Endoplasmic_reticulum

PS: Speed of sound is 343 m/s, diameter of a cell nucleus is ~ 0.000006m to give an idea.
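Putting those two figures together (rough numbers; real molecular speeds vary with mass and temperature):

```python
speed = 343.0    # m/s: speed of sound, as a stand-in for typical molecular speeds
nucleus = 6e-6   # m: rough diameter of a cell nucleus

crossing_time = nucleus / speed       # time to traverse one nucleus diameter
crossings_per_sec = speed / nucleus   # nucleus-diameters covered per second
print(f"~{crossing_time:.0e} s per crossing, ~{crossings_per_sec:.0e} diameters/s")
```

Tens of millions of nucleus-widths per second, which is why "everything bumps into everything" works as a strategy.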


Speed of sound in water is faster.


Yep, and the speed of sound is lower than the average speed of individual molecules. But I was aiming for an intuitive understanding rather than accuracy involving Brownian motion etc.


From a physics perspective I bet you have two things happening:

1. These molecules are moving around a lot. The kinetic energy of molecules at room or body temperature gives them impressive velocity relative to their scale, and they're also rotating, both as whole molecules and internally.

2. Compatible molecules are like magnetic keys and locks. They attract each other, and the forces align them at their meeting points, the same way that proteins fold spontaneously.

So the remaining part is getting concentrations appropriate for what you want to happen - and that's a matter of signaling molecules and "automatic" cell responses to changes in equilibrium. It's a really chaotic system and it's a wonder it works at all.

I imagine that's also one reason life is imprecise, i.e. no two individuals are alike even with identical genes. There's a lot of extra "entropy" introduced by that mess of a soup.


There are some animations that show how fast molecules and proteins move around in a cell; it's basically a bunch of extremely fast collisions and interactions happening at random that end up falling into proper configurations. The way science is taught in molecular biology (visually, with proteins binding to receptors as if it were fate) is usually completely wrong.


I recently started taking insulin. Check out the molecular structure for that. It blows me away how complex it is.


By molecular structure alone, insulin is actually one of the simplest proteins (though it's complex in ways you don't see in a static picture: lifecycle, oligomeric interactions).


Compared to something that isn't a protein, it's pretty complex.

It's like how the source code to `ls` is simple because it's one of the most basic Unix programs, or something like that.


I really like this video as it shows diffusing proteins at a realistic concentration: https://www.youtube.com/watch?v=VdmbpAo9JR4


I find most explanations of the Equivalence Principle that lies at the foundation of General Relativity to be very lax.

To wit, the idea is that you cannot distinguish whether you are in an accelerated frame or in a gravitational field; alternatively stated, if you're floating around in an elevator you don't know whether you're freefalling to your doom or in deep sidereal space far from any gravitational source (though of course, since you're in an elevator car and apparently freefalling... I think we'd all agree on what's most likely, but I digress).

Anyway, what irks me is that this is most definitely not true at the "thought experiment" level of theoretical thinking: if you had two baseballs with you in that freefalling lift, you could suspend them in front of you. If you were in deep space, they'd stay equidistant; if you were freefalling down a shaft, you'd see them move closer because of tidal effects, dictated by the fact that they're each falling towards the earth's centre of gravity, and therefore at (very slightly) different angles.

Of course, they'd be moving slightly toward each other in both cases (because they attract gravitationally), but the tidal effect is additional and present in only one scenario, allowing one to (theoretically) distinguish the two, apparently violating the bedrock Equivalence Principle.

I never see this point raised anywhere and I find it quite distressing, because I’m sure there’s a very simple explanation and that General Relativity is sound under such trivial constructions, but I haven’t been able to find a decent explanation.
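For what it's worth, the size of that tell-tale convergence is easy to estimate in the Newtonian picture. A sketch (treating the earth as a point mass and ignoring the balls' mutual attraction):

```python
g = 9.8        # m/s^2: gravitational acceleration at the surface
R = 6.371e6    # m: Earth's radius (elevator near the surface)
d = 1.0        # m: horizontal separation between the two baseballs

# Each ball accelerates toward the earth's *center*, so each acceleration
# vector tilts inward by roughly (d/2)/R radians; the horizontal components
# of the two balls point toward each other and add up.
closing_accel = g * d / R
print(f"relative closing acceleration ~{closing_accel:.1e} m/s^2")
```

Tiny, but nonzero in exactly one of the two scenarios, which is the point.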


You're right that this is glossed over in popular explanations, but the point you make is exactly the starting point for all formal courses and textbooks.

The first part of the argument is that for single point particles falling, the effect of gravity is the same for all particles. This suggests that we should model gravity as something intrinsic to spacetime itself, rather than as a field living on top of spacetime, which could couple to different particles with different strengths.

The second part of the argument, which is what you point out, is that gravity can have nontrivial tidal effects. (This had better be true, because if all gravitational effects were just equivalent to a trivial uniform acceleration, then it would be so boring that we wouldn't need a theory of gravity at all!) This suggests that whatever property of spacetime we use to model gravity, it should reduce in the Newtonian limit to something that looks like a tidal effect, i.e. a gradient of the Newtonian gravitational field. That leads directly to the idea of describing gravity as the curvature of spacetime.

So both parts of the argument give important information (both historically and pedagogically). Both parts are typically presented in good courses, but only the first half makes it to the popular explanations, probably out of simplification.
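In symbols (the standard textbook form, up to sign and index conventions): the relative acceleration of two nearby freely falling particles with separation $\xi$ is driven in the Newtonian limit by the second derivatives of the potential $\Phi$, and its general-relativistic counterpart is the geodesic deviation equation, with the Riemann tensor playing that role:

```latex
\frac{d^2 \xi^i}{dt^2} = -\frac{\partial^2 \Phi}{\partial x^i \, \partial x^j}\,\xi^j
\qquad\longrightarrow\qquad
\frac{D^2 \xi^\mu}{d\tau^2} = -R^\mu{}_{\alpha\nu\beta}\, u^\alpha \xi^\nu u^\beta
```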


> it should reduce in the Newtonian limit to something that looks like a tidal effect, i.e. a gradient of the Newtonian gravitational field.

Can you please explain to me how you went from "looks like a tidal effect in the Newtonian limit" to "a gradient of the Newtonian gravitational field"?


"Tidal effects" are defined in terms of having different gravitational fields in one place than another (i.e. the tidal bulge near to the moon occurs because the moon's field is stronger there).


That's not quite true, as illustrated by the tidal bulge opposite the moon.

Tidal forces occur much more due to the difference in the direction of gravity than due to the difference in magnitude.


The elevator car is a thought experiment that draws attention to the equivalence in sensation of acceleration on one hand, and being in a uniform gravitational field on the other hand. As you correctly point out, this particular thought experiment breaks down when you consider that all of the gravitational fields that we are accustomed to are non-uniform, and have apparent tidal forces.

The real principle of relativity is a bit more subtle (sometimes called the strong principle): that the effects of gravity can be explained entirely at the level of local geometry, without any need for non-local interaction from the distant body that is generating the gravitational field. To describe the geometry of non-uniform fields, we need more sophisticated mathematical machinery than what is implied by the elevator car thought experiment, but nonetheless, the elevator example is a useful launching point for that type of inquiry.


Yeah, the problem is that the equivalence principle is a _local_ property that cannot really be expressed precisely in standard English.

Clearly it will fail given a big enough lift to experiment in, since a big enough lift would essentially include whatever object is creating that gravitational pull (or enough of its effects to conclude its existence from other phenomena). However, these effects are nonlocal: you need two different points of reference for them to show up (like your two baseballs). In fact, most tidal forces are almost by definition nonlocal.

The precise definition involves describing curved spacetime and geodesics, but that is really hard to visualize as a thought experiment. The thought experiment does offer insight, though, as it is possible to imagine that, absent significant local variations in gravity, you cannot distinguish between free fall and a (classical) inertial frame of reference without gravity. This insight provides the missing link that allows you to combine gravity with the laws of special relativity and therefore electromagnetism, including the way light bends around heavy objects, which provided one of the first confirmations of the theory.


> you’d see them move closer because of tidal effects dictated by the fact that they’re each falling towards the earth’s centre of gravity, and therefore at (very slightly) different angles.

This point isn't raised anywhere because it's mostly a pedantic point that has nothing to do with the thought experiment. You shouldn't try to decompose thought experiments literally, otherwise you'll get caught up in unimportant details like this. Just assume the elevator is small enough relative to the earth that the field lines are effectively parallel, or better yet, just pretend the elevator is in an infinite-plate field.


But then again, realizing this problem with the thought experiment is a mark of a sophisticated student. This was the last question on my physics exam in 1991, and I still regret that I went with the simple explanation. I wonder whether the prof was looking for the students who really got it.


The assumption is the acceleration and the gravitation are in the same direction and the same magnitude. The point is that given these two, it's impossible to distinguish the two.

If you think it's sneaky to "implicitly" assume they're in the same direction, I would point out that this is no different from assuming they have the same magnitude. It would be kinda dumb to say "well this 1m/s^2 acceleration can't possibly be equivalent to gravity because gravity is 9.8m/s^2, so the statement is obviously wrong and they're trying to trick me!!"... same thing for direction.


I'm gonna assume that for purposes of the thought experiment you're supposed to envision a point-shaped elevator, not one where you can place two baseballs next to each other.


This was covered on PBS Space Time in an early episode on GR and talked about later as well.



To me, the layperson, the idea that you cannot distinguish whether you are in an accelerated frame or in a gravitational field seems wrong due to a very simple fact.

The force that would be exerted by acceleration versus gravity is different. The force we think of as gravity points toward a center that changes with your position, while acceleration comes from a uniform direction without regard to your position.


You're thinking of a specific gravity scenario versus a specific acceleration scenario. But the equivalence is true, it was one of the things shown by Einstein.


I wanted to thank everybody who took the time to explain this. Thank you.


I think the elevator scenario is imagining that the earth is a point source, and you are neglecting the (much smaller) gravitational forces for the sake of illustrating a more general phenomenon.


I have two quite bright nieces. When I was explaining the equivalence principle to them, right away they saw that in the gravitational field of the earth there would be tidal effects and in free space with just acceleration, none.

I had to apologize and say that the explanation was over simplified and really it would work, say, only for some creatures living exactly on the floor of the elevator.

One of the two, at a challenging high school, made Valedictorian (surprise to her parents who didn't know she had long been first in her class) then in college PBK, got her law degree at Harvard, started at Cravath-Swain, went for an MD, and now is practicing medicine. Bright niece.


Sort of meta, but I always shudder when someone says that science has "proven" something.

What sets science apart from most other methods of seeking answers is its focus on disproof. Your goal as a scientist is to devise experiments that can disprove a claim about the natural world.

This misconception rears its head most prominently in discussions at the intersection between science and public policy. Climate change. How to handle a pandemic. Evolution. Abortion. But I've even talked to scientists themselves who from time to time get confused about what science can and can't do.

The problem with believing that science proves things is that it blinds its adherents to new evidence paving the way to better explanations. It also leads to the absurd conclusion that a scientific question can ever really be "settled."


Proof never proves; it only implies. People are just bad at weighing how much proof there is and how strongly it implies something. Nuance is inconvenient in policy discussion and public discourse.

Science also doesn't only seek disproof. It uses both examples and counterexamples to confirm or deny a claim, or to shift how strongly it is confirmed or denied.


Not to be rude, but given current daily attacks on science and the scientific method, I can't let this stand - I think your meta intuition represents a fundamental misunderstanding of how science works.

It is simply wrong to think that scientific questions can never be definitively settled. Clearly there are some hypotheses that have been difficult (and may be impossible) to prove, for example, Darwin's idea that natural selection is the basis of evolution. There's ample correlative evidence in support of natural selection, but little of the causal data necessary for "proof" (until perhaps recently). In the case of evolution the experiments required to prove that natural selection could lead to systematic genetic change were technically challenging for a variety of reasons.

In the case of climate change, the problem again is that the evidence is correlative and not causal. Demonstrating a causal link between human behavior or CO2 levels and climate change (the gold standard for "proof") is technically challenging, so we are forced to rely on correlations, which is the next best thing. But, you are right, it is not "proof".

Establishing causality can be difficult but not impossible - the standard is "necessary and sufficient". You must show necessity: CO2 increase (for example) is necessary for global warming; if CO2 remains constant, no matter what else happens to the system global temperatures remain constant. And you must also demonstrate sufficiency: temperatures will increase if you increase CO2 while holding everything else constant. Those are experiments that can't be done. As a result, we are forced to rely on correlation - the historical correlation between CO2 and temperature change is compelling evidence that CO2 increases cause global warming, but it is not proof. It then becomes a statistical argument, giving room for some to argue the question remains "unsettled".

My point is that there are plenty of examples in science where things have been proven -- DNA carries genetic information, DNA (usually) has a double-stranded helical structure, V=IR, F=ma, etc. And there are things that are highly likely but not "proven", e.g., that human activity causes climate change.

While some of the issues you bring up remain unproven, what's really absurd is to think that no scientific questions can be settled.


I think this goes against the basis of the scientific method. There is a reason why they say everything is a hypothesis and nothing is ever proven. Anyone can propose an alternative model explaining something you call proven; your calling something proven does not inherently make that explanation correct.

This is not mutually exclusive with being against the attacks on science. Just because we shouldn't treat things as proven doesn't mean we can't come to a general consensus on a topic and act as if it were true. Climate change is real. Evolution is real. Don't inject yourself with bleach. Having a small number of quacks say 'it's just a hypothesis, and actually god is responsible for climate change and evolution' without any evidence doesn't change the general consensus and doesn't mean we have to stop everything until we prove the negative.

Ultimately, I think most of us agree in principle. Most of what we're discussing here is minor semantic differences in vocabulary.


Everything in science remains open to be disproved; it wouldn't be science otherwise. That's one way science is different from pure math. Or from religion for that matter.

That said, it is indeed annoying when people who don't understand science interpret "open for disproof" to mean "it's easy to disprove." Quantum mechanics and the second law of thermodynamics could in principle be disproven, but the evidentiary burden would be extremely high. (Insert obligatory Carl Sagan quote here.)


> Establishing causality can be difficult but not impossible - the standard is "necessary and sufficient". You must show necessity: CO2 increase (for example) is necessary for global warming; if CO2 remains constant, no matter what else happens to the system global temperatures remain constant. And you must also demonstrate sufficiency: temperatures will increase if you increase CO2 while holding everything else constant. Those are experiments that can't be done.

No. What is the basis for these claims?

They're both wrong.

It's not true that CO2 increase is necessary for global warming. If the sun got a lot hotter, global temperatures would rise. If non-CO2 GHGs increased, global temperatures would rise. If the overall albedo of the planet changes, global temperatures can rise. There are literally thousands of things that could cause the temperature to rise.

It's also not true that CO2 increase, holding everything else constant, would lead to long-term or even medium-term warming. We have no idea what the ecosystem will do for any given change in CO2 levels, since there are countless species that are net producers or net consumers of atmospheric CO2, all with exponential growth and feedback loops.

Still, even though both of those claims are wrong, CO2 increase may still cause global warming.

Furthermore, the things you claim are proven, are not proven, they are true by definition. All molecules carry information, and the fact that DNA carries genetic information is a direct consequence of the fact that it is DNA. V=IR by definition. F=ma by definition. There's no such thing as a "force" or "mass" or "acceleration" entity per se, these are metrics that are by definition equal in a given physical framework.

There is no way to 'technically' prove anything in science, and the reasons are simple:

(1) The past is gone - you can't access it

(2) You can't see the future

(3) Your knowledge of the present is extremely limited and inaccurate

These are the limitations of the real world, and science does its best to provide utility within that. It only focuses on making future predictions using the observed past as evidence, because you only can do that. You can't check your model in the present, because you can't instantaneously observe anywhere you aren't already observing. Checking your model on the past relies on what you think happened, i.e. what allegedly happened, but there is absolutely no way to truly know.

You can't even really prove anything 'novel' in mathematics, the only place where you can actually prove anything at all. Even there, all proofs are effectively just framing something that was already implied axiomatically in a way that allows our limited human minds to see the relevant/useful patterns that aren't immediately obvious to us.

My point is, acting as though you can truly prove anything in science,

> what's really absurd is to think that no scientific questions can be settled

is not only wrong, but in my opinion is a distraction from what science is actually for. It's not about settling questions. Science is never settled, and that's part of what's beautiful about it. It's about reducing our own ignorance and proving our past selves wrong, discovering patterns and models that equip us with the knowledge to build a better world for ourselves and the rest of humanity.

Why lie about being a great soccer player when you're already great at basketball? Let's focus on the beauty of science as a great journey of growth and exploration that accelerates the progress of humanity, instead of trying to make it do something that isn't possible in the real world.


> No. What is the basis for these claims?

"Science", as it is represented in the media, and in turn repeated and enforced (not unlike religion, interestingly) on social media and in social circles.

As opposed, of course, to actual science.

"Perception is reality." - Lee Atwater, Republican political strategist.

https://www.cbs46.com/news/perception-is-reality/article_835...

https://en.wikipedia.org/wiki/Lee_Atwater

"Sauron, enemy of the free peoples of Middle-Earth, was defeated. The Ring passed to Isildur, who had this one chance to destroy evil forever, but the hearts of men are easily corrupted. And the ring of power has a will of its own. It betrayed Isildur, to his death."

"And some things that should not have been forgotten were lost. History became legend. Legend became myth. And for two and a half thousand years, the ring passed out of all knowledge."

https://www.edgestudio.com/node/86110

Threads like this one, and many others like it, well demonstrate the precarious situation we are in at this level. Imagine the state of affairs around the average dinner table. Although, it's not too infrequent to hear the common man admit (preceded by the realization) that they don't know something. As one moves up the modern-day general intelligence curve, this capability seems to diminish. The exact cause of this is a bit of a mystery (24-hour cable propaganda and the complex dynamics of social media is my best guess) - hopefully someone has noticed it and is doing some research, although I've yet to hear it mentioned anywhere. Rather, it seems we are all content to attribute any misunderstanding that exists in modern society to Fox News, Russia, QAnon, or the alt-right. I'm a bit concerned that this approach may not be the wisest, but I imagine we will find out who's right soon enough.


> I think your meta intuition represents a fundamental misunderstanding of how science works.

It sounds to me like the grandparent is 100% correct.

> It is simply wrong to think that scientific questions can never be definitively settled.

They made no such claim, speaking of intuition.

> Clearly there are some hypotheses that have been difficult (and may be impossible) to prove, for example, Darwin's idea that natural selection is the basis of evolution

I've seen very little evidence in online discussions (Reddit for example) among armchair scientists that the theory of evolution is anything short of cold, hard, scientific fact.

> In the case of climate change, the problem again is that the evidence is correlative and not causal. Demonstrating a causal link between human behavior or CO2 levels and climate change (the gold standard for "proof") is technically challenging, so we are forced to rely on correlations, which is the next best thing. But, you are right, it is not "proof".

Is this (it is not proven) the message they're sending when they say things like "The science is in", just as one example?

> Establishing causality can be difficult but not impossible - the standard is "necessary and sufficient". You must show necessity: CO2 increase (for example) is necessary for global warming; if CO2 remains constant, no matter what else happens to the system global temperatures remain constant. And you must also demonstrate sufficiency: temperatures will increase if you increase CO2 while holding everything else constant. Those are experiments that can't be done. As a result, we are forced to rely on correlation - the historical correlation between CO2 and temperature change is compelling evidence that CO2 increases cause global warming, but it is not proof. It then becomes a statistical argument, giving room for some to argue the question remains "unsettled".

This is not the message I've heard, at all, from any mainstream news source, and it's certainly not the understanding of 95% of "right minded" people I've ever encountered.

> While some of the issues you bring up remain unproven, what's really absurd is to think that no scientific questions can be settled.

What's even more absurd, to me, is how you managed to find a way to interpret his text in that manner. And you're obviously (based on what you've written here) a genuinely intelligent person. Now, imagine how the average person consumes and processes the endless stream of almost pure propaganda, from both "sides", on this topic and many others.

The unnecessarily dishonest manner in which the government and media have chosen to represent (frame) reality to the general public has left an absolutely massive number of easily exploitable attack vectors for "conspiracy theorists" to exploit. And if you are of the opinion that all conspiracy theorists are idiots so you have nothing to worry about, consider the possibility that this too has been similarly misrepresented to you.

If a society chooses to largely abandon things like logic and epistemology in the education of its citizens, thinking propaganda is a suitable replacement, don't be surprised when things don't work out in your favor. If we can barely manage such things here, why should we expect Joe and Jane six-pack to somehow pull it off?


See no evil, hear no evil, speak no evil.

Amen.


It kind of floors me that we're taught science the way it is. Much simpler: Karl Popper's conjecture and refutation. So I tell people that science mandates "I believe something, so I should try to prove it wrong." I think understanding that is significantly more beneficial than repeating the arbitrary n- steps of a scientific method. It's two steps. Keep it simple.


One thing that rubs me the wrong way about this "no proof ever, only disproof" attitude is that it advantages the new hypotheses too much.

Any hypothesis that I invent at this very moment, is from this perspective in the best position a hypothesis can ever be. There is no disproof. There is even no coherent argument against it, because I literally just made it up this second, so no one had enough time to think about it and notice even the obvious flaws. This is the best moment for a hypothesis... and it can only get worse.

I understand that there is always a chance that the new hypothesis could be correct. Whether for good reasons, or even completely accidentally. (Thousand monkeys with typewriters could come up with the correct Theory of Everything.) Yes, it is possible. But...

Imagine that there are two competing hypotheses, let's call them H1 and H2.

Hypothesis H1 was, hundred years ago, just one of many competing options. But when experiment after experiment was done, the competing hypotheses were disproved, and only this one remained. For the following few decades, new experiments were designed specifically with the goal of finding a flaw in H1, but the experimental results were always as H1 has predicted them.

Hypothesis H2 is something I just made up at this very moment. There was not enough time for anyone to even consider it.

A strawman zealot of simplified Popperism could argue that a true scientist should see H1 and H2 as perfectly equal. Neither was disproved yet; and that is all that a true scientist is allowed to say. Maybe later, if one of them is disproved in a proper scientific experiment, the scientist is allowed to praise the remaining one as the only one that wasn't disproved yet. To express any other opinion would be a mockery of science.

Of course, there always is a microscopic chance that H1 might get disproved tomorrow, and that H2 might resist the attempts at falsification. But until that happens, treating both hypotheses as equal is definitely NOT how actual science works. And it is good that it does not.

In actual science, there is something positive you are allowed to say about H1. Something that would make the strawman zealot of simplified Popperism (e.g. an average teenager debating philosophy of science online) scream about "no proof ever, only disproof". H1 is definitely not an absolute certainty. But there is something admirable about having faced many attempts at falsification, and surviving them.


I've not read Popper directly, so I'd be interested in his actual argument on this.

But, I wonder if you can describe H1 as being a stronger hypothesis than H2 by virtue of withstanding more and higher quality attempts to disprove it?


"Withstanding more arguments" can be gamed by throwing thousand of silly arguments at your favorite hypothesis. And "higher quality" is the part people would disagree about.

I think that when people are essentially honest and trying to find out truth, they can agree on reasonable rules. But there is no way to make the rules simultaneously philosophically satisfactory and bulletproof against people who are willing to lie and twist the rules in their favor.

For example, in real life you usually cannot convince crackpots about being wrong, but that is okay because at some moment everyone just ignores them. If you try to translate this into a philosophical principle, you end up with something like "argument by majority" or "argument by authority". And then you can have Soviet Union where scientific progress is often suppressed using these principles. But what is the alternative? No one can ever be ignored unless you disprove their hypotheses according to some high standard? Then the scientific institutions would run out of money as they would examine, using the high standard, the 1000th hypothesis of the 1000th crackpot.


This is an age-old domain of thought known as philosophy of science [0]. Although, by prepending your post as "meta", perhaps you are already aware of it.

I should add: As a human being, it is probably impossible to separate the scientist from the philosophy in which they explore, proceed with, and promote their work. In some cases, it might not be something they are even aware of. Instead, the scientific system (as a sort of world institution) should itself be designed to always seek out and protect truth, regardless of prevailing contemporary knowledge.

[0] https://en.wikipedia.org/wiki/Philosophy_of_science


> Your goal as a scientist is to devise experiments that can disprove a claim about the natural world.

If this claim was true, it would disallow science to make true claims, because no experiments can disprove such claims. Truth is a delicate matter and can't be handled by simple methods. Questions may not be settled, but they can be difficult to challenge.


> it would disallow science to make true claims

Isn't that exactly how science works? It does not make true claims. It produces statements with disclaimers: if this and this, then Y holds, as long as we don't observe otherwise.

You cannot use the scientific method to definitely say: "X is true".


Yes, that's exactly how it works. Every scientist I've ever asked has stood by that. Science dispels untruth.


I don't know if this would be my "one question" if I could ask the most brilliant minds in science, but something that always bothered me:

When I took physics they basically said "at first scientists were disturbed by the fact that magnets imply that two objects are interacting without any physical contact, but then Faraday came along and said 'the magnets are actually connected by invisible magnetic field lines' and that resolved everything."

How does saying "but what if there's invisible lines connecting them" resolve anything? To be clear, I'm not objecting to any of the actual electromagnetic laws or using field lines to visualize magnetic fields. It's just that I don't get how invoking invisible lines actually explains anything about how objects are able to interact without physical contact.

(Also, it is not lost on me that this question boils down to "fraking magnets, how do they work?")


I'm a physicist specifically working with magnetic systems, but I have very little pre-graduate teaching experience, so take this attempt to answer the question with a grain of salt.

The reason some people regard Faraday's original explanation of the eponymous law (it is worth noting that at the time it was widely regarded as inadequate and handwavy) as illuminating is because Faraday visualized his "lines of force" as literal chains of polarized particles in a dielectric medium, thereby providing a seemingly mechanistic local explanation of the observed phenomena. Not much of this mindset survived Maxwell's theoretical program and it has very little to do with how we regard magnetism today. Instead, the unification of electricity and magnetism naturally arises from special relativity, whereas the microscopic basis of magnetism requires quantum mechanics. There isn't really any place for naive contact mechanics in the modern picture of physics, so in that sense I would regard Faraday's view as misleading.

Finally, I can't end any "explanation" of magnetism without linking the famous Feynman interview snippet [1] where he's specifically asked about magnetism. It doesn't answer your question directly, but it's worth watching all the more because of it.

[1] https://www.youtube.com/watch?v=MO0r930Sn_8


That interview is so good! What a dick, but what a teacher!


What I see at the beginning of that video is somebody who doesn’t want to spend the energy answering a complex question. Then, in the process of dismissing the question he gets drawn in and can’t help himself from really getting into it.

I don’t know anything about Feynman beyond vaguely associating his name with science, but watching this makes me want to seek out more from him.


You're in for quite a treat then. It sounds like you might have more of an interest in his technical work and scientific contributions and teaching materials (of which there is plenty, and of high quality), but personally I quite enjoyed this book of his as well: https://en.wikipedia.org/wiki/Surely_You%27re_Joking,_Mr._Fe...!


That Feynman snippet was so awesome. Flawless. Thanks for sharing it.


It changes the problem from one of action at a distance to local interaction. Instead of "field lines", I would say "field". Field lines are just one way of visualizing a field.

So, if we don't have the notion of fields, then we have a kind of situation of how does object A know about remote object B. Like how does one object know about the motions of literally every other object in the Universe. Perplexing.

Once you come up with the idea of a field, okay you have to at some level accept that there are fields that permeate all of space. But what this intellectual cost buys you is that now an object only has to sense the field local to it to respond to all objects in the universe.

Think of objects bobbing on the ocean. One way to conceptualize that is that any object anywhere could cause this object here to bob in some way. How does this object know about all the other objects? Instead we could say that there is ocean everywhere. Locally, objects bobbing put ripples into the ocean. Locally, ripples cause objects to bob. Each object no longer needs to "know about" every other object it just needs to react to the ripples at its location, and the ripples get sent out from its location.
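For what it's worth, that local-ripples picture can be sketched in code. This toy 1-D wave simulation (a standard finite-difference scheme; all the numbers are arbitrary, chosen only to make the effect visible) updates each cell using only its immediate neighbours, yet a kick near one end is eventually felt far away, after a propagation delay:

```python
# Toy 1-D "ocean": u[i] evolves via a finite-difference wave equation.
# The update rule for cell i reads only cells i-1, i, i+1 -- strictly local --
# yet a disturbance near the left edge eventually reaches a distant cell.
N, c = 200, 1.0        # number of cells, wave speed (in cells per step)
u = [0.0] * N          # displacement now
u_prev = [0.0] * N     # displacement one step ago
u[5] = 1.0             # a single "bob" near the left edge

steps_until_felt = None
for step in range(400):
    u_next = [0.0] * N
    for i in range(1, N - 1):
        # local rule: new value depends only on the nearest neighbours
        u_next[i] = 2 * u[i] - u_prev[i] + c**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u_prev, u = u, u_next
    if steps_until_felt is None and abs(u[150]) > 1e-6:
        steps_until_felt = step + 1  # first moment the distant cell moves

print(steps_until_felt)  # with c = 1, the ripple crosses 145 cells in 145 steps
```

The distant cell never "knows about" the source; it only reacts to the local state of the medium, and the delay falls out of the local rule for free.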

Does this help?


This, gravity, and quantum mechanics, I think, are things that people just accept as is; I don't think anyone really knows how or why it works, it just works. It could be that our brains are not wired to understand how two things can pull on each other without anything physically connecting them. My knee-jerk explanation is that we live in a simulation, that the simulation is not in anything like a physical world, and we are just not wired to grasp it, just like an ant won't be able to learn calculus.


Gravity and magnetism are two phenomena that I always imagined could actually be explained in higher dimensions (except we can't see those higher dimensions so it'd be just speculation).

Imagine two circles in 2D that repel each other the closer you get them together, like magnets do. In 2D it would look like they're interacting at a distance, but maybe in 3D they're two slightly flexible cylinders that are actually touching at the ends, just not in the 2D plane you're observing. The interaction is "properly physical" in 3D, but in the 2D plane it seems magical.

That's a way that I imagine it in 2D vs 3D, so this might be similar in 3D vs ND, where N > 3. Of course this is all baseless speculation, but it seems kinda plausible in my head.

Edit: bad drawing of what I meant: https://imgur.com/362tcHg


There are models using higher dimensions (and lower dimensions, and fractional dimensions) to model physical phenomena, but the problem with a lot of them is that you end up with the "fitting n points with a degree-n polynomial" problem. It's trivial to create a model that agrees with all observed data; it's a lot harder to make a model that accurately predicts unseen data.
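A minimal numpy sketch of that problem, with made-up data (the hidden "truth" here is a sine curve, sampled with noise): a degree-5 polynomial agrees perfectly with all six observed points, but that agreement tells you nothing about how it behaves on a point it never saw.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 6)                              # six observations
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)  # noisy samples of a hidden "truth"

exact = Polynomial.fit(x, y, deg=5)   # six points, degree 5: interpolates every point
simple = Polynomial.fit(x, y, deg=3)  # lower degree: cannot hit every point

print(max(abs(exact(x) - y)))   # essentially zero: "agrees with all observed data"
print(max(abs(simple(x) - y)))  # visibly nonzero on the training points

# Neither training residual says anything about a point outside the data:
x_new = 1.3
truth = np.sin(2 * np.pi * x_new)
print(abs(exact(x_new) - truth), abs(simple(x_new) - truth))
```

The "exact" model wins on every observed point by construction, which is exactly why a perfect fit to the data is such weak evidence for a model.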


Fascinating thought! Makes sense though. Given n inputs and outputs, you could literally make an if-statement that solves nothing but gives you the 'right answer'. Actually understanding the fundamentals (often unseen) of a problem to then create an elegant, holistic solution that almost prophetically seems to predict the future is the current peak of human thought achievement, IMO.


Gravity is explained that way, it is a curvature in spacetime (4D): https://www.youtube.com/watch?v=CvN13ZE544M


I'm no expert in the matter, but I guess the problem is somewhat solved as the magnet then interacts with the field lines and these with the other magnet. It depends on how real you believe these invisible magnetic lines are. The magnetic field turned out to be a very real entity, as when it is perturbed these perturbations travel at the speed of light, and the second magnet feels them with a slight delay. The perturbations also manifest themselves as EM radiation/light.


It's local (it propagates continuously through space, no faster than the speed of light), and if you want you can view it as mediated by particles (photons), although just viewing it as the EM field is fine too (and certainly no less local). The same goes for gravity, which was spooky action at a distance for even longer, once you get to GR.


The answer is related to Relativity. Read this: https://ocw.mit.edu/courses/materials-science-and-engineerin...


I don't think the invisible 'lines of force' really resolved anything in the minds of the 19th century scientists, but what eventually did was acceptance of Faraday's speculation that the lines of force were physically real and existed as some change of state in some medium that existed throughout space.

Maxwell picked up this idea and ran with it, developing a mathematical theory for the dynamics of the electromagnetic field. Instead of one object somehow magically interacting at a distance, interactions between objects resulted from changes in the electromagnetic field that propagated through space.

The final paragraphs of Maxwell's "Treatise on Electricity and Magnetism" are somewhat relevant.

This is 30-40 years after Faraday first wrote about lines of force, and there still wasn't really consensus about how to explain electromagnetic phenomena.

[emphasis added by me]

> Chapter XXII: Theories of Action at a Distance

> ...

> There appears to be in the minds of these eminent men, some prejudice, or a priori objection, against the hypothesis of a medium in which the phenomena of radiation of light and heat, and the electric actions at a distance take place. It is true that at one time those who speculated as to the causes of physical phenomena, were in the habit of accounting for each kind of action at a distance by means of a special aethereal fluid, whose function and property it was to produce these actions. They filled all space three and four times over with aethers of different kinds, the properties of which were invented merely to 'save appearances,' so that more rational enquirers were willing rather to accept not only Newton's definite law of attraction at a distance, but even the dogma of Cotes, that action at a distance is one of the primary properties of matter, and that no explanation can be more intelligible than this fact. Hence the undulatory theory of light has met with much opposition, directed not against its failure to explain the phenomena, but against its assumption of the existence of a medium in which light is propagated.

> We have seen that the mathematical expressions for electrodynamic action led, in the mind of Gauss, to the conviction that a theory of the propagation of electric action would be found to be the very key-stone of electrodynamics. Now we are unable to conceive of propagation in time, except either as the flight of a material substance through space, or as the propagation of a condition of motion or stress in a medium already existing in space.

> Hence all these theories lead to the conception of a medium in which the propagation takes place, and if we admit this medium as a hypothesis, I think it ought to occupy a prominent place in our investigations, and that we ought to endeavour to construct a mental representation of all the details of its actions, and this has been my constant aim in this treatise.


Crypto and practical security. I get tired of the circular “don’t roll your own crypto unless you’re qualified”. How does one become qualified? I don’t feel like I know how to evaluate many of the arguments people make for or against technologies people argue about on HN, such as Signal or different password managers. I feel like “security through obscurity” is a bad thing, and “layers of security” are a good thing, but isn’t all security obscuring something, and how does one evaluate whether a layer is adequate? “Just use bcrypt” - okay, help me understand!


The reason people say not to roll your own crypto is that there is no secret answer to making things secure; we just have smart and creative people bash their heads against a crypto protocol/implementation for a long time and hope we found all the problems.

So unless you have a good reason to do something else, and the budget to pay experienced people to bash their heads against it, you should stick to an implementation that has had this effort expended on it.

If you want an intro about common problems in custom cryptosystems, go look at cryptopals or something, but don't get too cocky that you know everything.


It's also easy to dramatically underestimate the order of magnitude of effort involved in "the budget to pay experienced people to bash their heads against it".


Also, what makes me irritated about this blurb is that there are many "layers" of what people could reasonably call "crypto". There are the cryptographic primitives. There are higher-level crypto algorithms and functions that use those primitives. There are even higher-level cryptographic protocols, file formats etc. Then there's actually the application, applying crypto to a real-world problem.

Even in each of those, there are two "levels" of implementation: specifying an exact algorithm that implements a solution to problem x, and actually producing the code that implements the algorithm.

At some level, there is no ready-made solution to every problem. Even if the foundations are implemented by "somebody else", the line's blurry. At which level of (lack of) expertise and which level of "lowness" of the implementation should I start to worry?


> I get tired of the circular “don’t roll your own crypto unless you’re qualified”. How does one become qualified?

Oh, by all means, roll your own crypto, break it, and roll it again. Just do not use it.

Also, break other people's crypto and study theory.

By the way, the advice is not "unless you are qualified". Nobody is qualified to just roll their own. Good crypto is a community project and can not happen without reviewers.


From what I understand, the original context of "security through obscurity = bad" is that it's really hard to keep secrets, and it's hard to design secure systems, so peer review is really helpful. Thus if the security of your system relies on it being secret, you are probably in a bad place: it's hard to keep something so big secret, it's hard to redesign the system if it leaks, and you probably had fewer people look at it in order to keep it secret. This is in contrast to just having a password or key be secret. You can easily change a password if it gets leaked, and you can keep a small password secret much more easily than the design of the whole system.

More generally, security is like any other field. You have to evaluate arguments based on the logic and evidence given. The main difference is that with crypto, it is much easier to shoot yourself in the foot and have catastrophic failure, since you have to be perfect and the attackers just have to be right once to totally own you. Thus the industry has standardized on a few solutions that have been checked really really well.

More generally, if you are interested, I would say read the actual papers. The papers on bcrypt, argon2, etc. explain what problems they are trying to solve, usually by contrasting with previous solutions that have failed in some fashion. That doesn't mean reading the paper will explain everything, or make you an expert, or qualify you to roll your own crypto. Nor should you believe that just because a paper's author says something is a good idea, it actually is. It will, however, explain why slow hash functions like bcrypt/argon2/scrypt were created and are better choices than previous solutions in the domain like md5.
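As a small illustration of the "slow by design" point (Python standard library only; the scrypt parameters here are just for demonstration, not a production recommendation):

```python
# Why password hashes are deliberately slow: a fast hash like md5 lets an
# attacker test millions of guesses per second, while a memory-hard KDF like
# scrypt makes each guess expensive by design.
import hashlib
import os
import time

password = b"correct horse battery staple"
salt = os.urandom(16)  # per-password salt defeats precomputed tables

t0 = time.perf_counter()
fast = hashlib.md5(password).hexdigest()                        # near-instant
t1 = time.perf_counter()
slow = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)   # tunable CPU/memory cost
t2 = time.perf_counter()

print(f"md5:    {t1 - t0:.6f}s")
print(f"scrypt: {t2 - t1:.6f}s")  # orders of magnitude slower per guess

# Verification just recomputes with the same salt and parameters.
assert slow == hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)
```

The extra cost is negligible for one legitimate login but ruinous for an attacker who has to pay it once per guess across a whole dictionary.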


> I get tired of the circular “don’t roll your own crypto unless you’re qualified”.

It's true, but you need to realize that you're qualified enough only when you understand that you shouldn't roll out your own crypto.

In my opinion, the only person who has credibly demonstrated being able to roll his own crypto is djb (http://cr.yp.to/)

> but isn’t all security obscuring something,

Keeping a secret isn't "obscuring" something, it's hiding it entirely. Security through obscurity is bad because it relies on attackers being dumb. The smartest person in the world cannot be expected to guess a well chosen and kept secret.


You should study cryptanalysis. This is why rolling your own crypto is dangerous. Not just because the result is going to be insecure, but also because it isn’t particularly educational, but it feels like it is. It is easy to convince yourself you know more than you do if you spend a lot of time playing with bad crypto systems.

Edit: I should add that even if you are an expert in cryptanalysis, you still shouldn’t just roll your own crypto. It’s the analysis of the entire community, not the credentials of the author, that makes modern cryptography so strong.


The proper way of interpreting the sentence about "don't roll your own crypto" is that it actually means "don't roll out your own crypto until it has been peer reviewed by many experts". At which point it kind of stops being "your own", in a way.


I don't see it mentioned, but I thought I'd chime in. Even if your crypto algorithm is perfect and works infinitely fast, there's still the problem of implementation. And that's usually not perfect and often leads to practical gaps that can be exploited. During WWII, the German Enigma machines were broken in part due to design errors (like letters wouldn't be encoded to themselves) and user error (like sending messages that start/end the same way). Even if crypto is some day perfect in a sense, it may still be used in imperfect ways that allow one to break it or circumvent it entirely.


I recommend Serious Cryptography by Jean-Philippe Aumasson. After reading it, you will gain enough understanding to compose cryptographic primitives and build your own secure system based on well-known best practices, as long as you don't deviate too much from the golden paths. Although with that, you still won't know how to design or implement these primitives yourself. It's like having a nice toolkit of screwdrivers, hammers, spanners etc to build your thing, but you can't build those tools themselves.


> I get tired of the circular “don’t roll your own crypto unless you’re qualified”

It's not circular, it's a simple flowchart.

Are you writing an app or are you trying to invent more advanced crypto?

"writing an app" -> dont roll your own crypto

"invent more advanced crypto" -> go learn and research crypto history, math, etc..


If you have a good CS background, I highly recommend the lecture notes for the security class I took in undergrad: https://inst.eecs.berkeley.edu/~cs161/sp10/

That's from 10 years ago, so you might be able to find video of a more recent version; try to find a year when Wagner taught, he's great.


Springer has made "Understanding Cryptography" available for free https://link.springer.com/book/10.1007/978-3-642-04101-3


> How does one become qualified?

By attacking crypto--a lot. And submitting your crypto to be attacked by others--a lot. It's the only way to develop the requisite level of humility to design good crypto.


Has someone that thought they were taking LSD ever turned into a permanent schizophrenic zombie or ended up in a mental institution, or is it all urban legend? If someone didn't know they were predisposed to mental illness, is it reasonable to dismiss their experience in order to maintain that LSD is safe?

If any of this is true, are there any sources aside from "my friend's friend's brother took too much and now he is...."? What is the scientific explanation, and do we know enough about the mind at all?

I feel like LSD has a lot of contradictory information out there, and the proponents feel the need to hand-wave concerns away because it is 'completely harmless and leaves your system in 10 hours'. But when nobody knows what they're actually getting because it doesn't exist in a legal framework, then it muddies the whole experience.

People say certain doses can't have more effect than lower doses after a certain threshold. It seems like the same people say "omg man 1000ug you are going to fry your brain!"

What is the truth? If it "just" had an FDA warning like "people with a family history of schizophrenia should not take it", that would be wildly better than what we have today.

Please no explanation about shrooms. Just LSD and the 'research chems' distributed as LSD.


"Has someone that thought they were taking LSD ever turned into a permanent schizophrenic zombie or in a mental institution, or is it all urban legend."

Tangential, and not an answer to your question, but if you're like me, you will be fascinated to learn that there is a drug (MPPP, a synthetic opioid) that if cooked incorrectly yields "MPTP"[1], which will give you Parkinson's. As in, forever. You take this drug (at any age) and then you have Parkinson's for the rest of your life.

[1] https://en.wikipedia.org/wiki/MPTP


so it stands to reason that other substances can exist that rewrite your mind and make you ineffective at other behaviors


I mean, not really.

1. There is one substance that rewrites your mind.

2. There is more than one substance that rewrites your mind.

Are very different postulates.


1 opens the door to 2 being possible


When I looked into illegal drugs, I found it very difficult to find reliable data.

On one hand you have anti-drug people, usually backed by the authorities. Listen to them and all drugs will make your body rot, give you hallucinations like datura, and for some reason cause complete addiction after a single dose.

Drug users on the other hand will tell you that it is not as bad as alcohol/tobacco/coffee/..., that concerns are unfounded, that the police are the only risk, etc...

The truth is almost impossible to find. Even peer reviewed research is lacking. I guess there are several reasons for that. Availability of controlled substances. Ethical concerns regarding experimentation. Issues with neutrality.

Now from what I gathered about LSD (and psychedelics in general): these are very random. If you take a reasonable dose, you are most likely going to have a nice, fun trip and nothing more. But it can also fuck you up for years, or maybe bring significant improvement in your life. High doses increase the chance of extreme effects and nasty bad trips, but it shouldn't kill you unless you are dealing with industrial quantities. The substance itself is not addictive, but the social context may be. The big problem is that there is no way to tell how it will go for you. There are ways to improve your chances, but it will always be random.

As for fake LSD, there are cheap reagent tests for that. They are not 100% reliable but that's better than nothing. You can also send your sample anonymously to a lab that will do a much more accurate GC/MS analysis for you.


> Drug users on the other hand will tell you that it is not as bad as alcohol/tobacco/coffee/..., that concerns are unfounded, that the police are the only risk, etc...

Sure, some ("plenty", in absolute numbers) will tell you this, but I don't recall being in many forums where that attitude doesn't get significant pushback (as opposed to the anti-drug community). The modern "pro drug" community has a fairly significant culture of safety within it, unlike back in the sixties.

> The truth is almost impossible to find.

There is plentiful anecdotal evidence online. Any clinical evidence, if they ever get around to producing it in any significant volume, will be utterly minuscule compared to the massive volume of trip reports and Q&A available online (and I highly doubt more trustworthy, considering what you're working with and the size of the trials that will be done), much of it from people who know very well what they're talking about, not unlike enthusiasts in any domain.

> Now from what I gathered about LSD (and psychedelics in general): these are very random.

Depends on one's definition of random.

> If you take a reasonable dose, you are most likely going to have a nice, fun trip and nothing more.

Effects vary by dose of course, but I've seen little anecdotal evidence that suggests high doses have a different outcome, and plenty that suggests the opposite.

> But it can also fuck you up for years, or maybe bring significant improvement in your life.

See: https://rationalwiki.org/wiki/Balance_fallacy

> The big problem is that there is no way to tell how it will go for you. There are ways to improve your chances, but it will always be random.

I believe this to be true, but don't forget the fallacy noted above.

That said, these things are not toys - extreme caution is warranted.


Permanent schizophrenic zombie, maybe a bit extreme, but severe and traumatic long-lasting psychological damage is a not-uncommon phenomenon.

I had a fling with psychedelics in my teens, and everything was great until the one time it wasn't. I was taking psychedelics pretty much every weekend, and by my count have tried over a dozen of them.

Had an experience with LSD which completely shook me to my core and gave me such severe PTSD and trauma that every night I started to have massive panic attacks and needed medical help. My entire worldview and perception of reality was shattered, I wasn't able to "anchor" myself anymore and it all felt like a sham. I was completely dissociated. I also got HPPD: to this day, everything has a sharpened oil-painting type texture to it that increases based on my anxiety level, and I'm sensitive to visual + aural stimuli (loud, brightly-colored places are unpleasant). If I get too anxious, I start to dissociate.

It took ~2 years for the PTSD to subside for the most part, but still if I am under a lot of stress I am liable to have a panic attack and get flashbacks and need to go find somewhere quiet to sit somewhere alone to try to work through it.

LSD being the particular substance has nothing to do with it, in my opinion. I was young, dumb, reckless, and played with fire then got burned. It could have happened with any of the other dozen psychedelics I took, but it just so happened to be LSD the one time that it did.

But I want to add, that while giving me the most nightmarish, traumatizing experience of my life, the best/most positively-profound experience has also been on the same substance. I grew up in a pretty abusive household and didn't do well forming relationships growing up, and had a lot of anger and resentment in my worldview. After taking psychedelics (LSD, 2C-B, Shrooms) and MDMA with the right group of people a few times, my entire perspective shifted. For the first time in my life, it felt like I understood how it felt to be loved, and what "love" was, and how we're "all in this together" so we may as well be good to each other while we're here.

It's been a long time since I've touched any of that stuff and I'm not sure I ever will again, but I don't think it's inherently bad or good. Psychedelics are like knives, they're neutral - can be used as a tool or cut the hell out of you if you're reckless.

---

Footnote: For context, this was probably due to life circumstances/psyche at the time. I was in a relationship with a pretty toxic partner, and my mental state wasn't the greatest. In hindsight, it seems like I was almost begging for a "slap in the face" if you will.


If you don't mind me asking (and this is clearly a sensitive topic so feel free not to reply): What do you mean by PTSD and flashbacks? As in, was the trip so bad that remembering it creates anxiety, or are you reliving unrelated traumatic memories that weren't an issue to live with before using the drug?


It literally feels as if I'm being transported back to that same night, starting to relive it all over again. It's entirely illogical but if you knew what happened it might make more sense (happy to elaborate here and give a brief description of what happened/why it messed me up so bad, I'm perfectly okay to talk about it now).


I'd be interested to hear your story. I've never used psychedelic drugs, but I find their effects fascinating.


I took 300ug of LSD recklessly on a particularly bad day for me, in a particularly uncomfortable setting.

Well, that night went bad. Really, really, life-alteringly bad. For the first time, I had a bad trip. And not like, some mildly uncomfortable thoughts. I got a bad feeling in my stomach from the moment I dosed, and I knew something was going to be different this time.

As I started to come up, the bad feeling and a dark presence grew, and I pulled out my phone. I started a timer, and I watched as the time slowed to a point where it completely stopped. I started looping, I would get up off the couch, walk a few feet, and be teleported back. Over and over.

I realized that I had gotten so high, that time was no longer moving. And if time was not moving, I could maybe never come down. I was stuck here forever. And then the hellish nightmare started.

I felt like I was losing control of myself, like something else was trying to take over, and whoever won the battle, that is the consciousness that would exist. The more I fought, the more painful things got. Pain the likes of which no one can physically imagine.

Went upstairs and laid down in my bed, began going out of body. I started dying over and over in unimaginable ways in my head, trapped in loops. Pain beyond anything I've ever felt in reality, there was no limit. It was tied to my breath, I realized that it had been so long since I had breathed, I kept forgetting who I was and what was going on, and then I would catch a slight glimpse and remember and fight so hard to take another breath. And there was so much pain in fighting to "survive" and hold on to who I was.

Eventually, the pain/struggle became too much, and I "gave in" and said "okay, I give up, you win, I can't take it anymore, I'd rather die." And that's when it stopped. There appeared this giant shape of light/energy that was every color at once, and colors we don't have words for, and it "touched me" (could have been me moving towards it, or it towards me, there wasn't really a concept of this).

When it "touched" me, what it "showed" me was something I later learned is called an "Ouroboros", the snake eating it's tail. It showed me what "infinity" really meant, and that was too much to handle and shattered my psyche.

In that moment my body/mind/soul felt like it was obliterated to pieces by some energy beam in the most excruciating, searing pain, and I woke up in my bed having just pissed myself.

It took a long time to piece myself back together after that one.

---

There are a lot of details I've omitted for brevity's sake, but this captures the gist of it.

The majority of my trauma has to do with anything related to loops: think Nietzsche's Eternal Return, general time-loops, fear of time-stopping, etc.

When I have panic attacks I have to stop myself from starting a stopwatch on my phone to make sure time is still moving because it'll cause a feedback loop and ratchet-up the panic, causing the time-dilation to increase in a vicious cycle.


Holy. That sounds intense, to say the least. A part of me would like to experience this, even though you made it abundantly clear that it did not have a positive impact on your life.


"ego death" is a common aspect of acid trips and the experience seems to come down to your willingness to relinquish control. this reads like what was described. if you were to look up that term you'll see others that will feature similar features - with or without pain, with or without worry.

not having a reliable way to know exactly what you took can amplify the anxiety. when your brain starts filling up with serotonin and whites everything out just like people on their deathbeds report, are you supposed to let go? when your sense of self has been obliterated and the next moment you are in the body of another mammal lost and confused in the forest for an entire lifetime before being transported back into your body and only a minute has gone by - but your trip is to last another 9 hours, should you fight it? Distinct neural networks in your mind that never communicate are now connected, vestigial components of the mind are now being expressed, are you being replaced in a firmware dump and flash?

a lot of people have a friend with them to guide them through an acid trip because trips can be steered with sounds and words, simple chimes, melodies.

would it have helped? very hard to say. but as the author wrote, the bad day and uncomfortable setting did not help. It is similar to a dream state (just radically more intense), where the things on your mind and also happening around you can affect the direction of your dreams.


Yeah, I think it entirely had to do with my inability to relinquish control and "just let go". Although in this context, that was literally what felt like the fight to survive, instead of "being chill". Ego death commonly is either the most horrendous or most nirvanic thing depending on how readily someone gives in.

> when your sense of self has been obliterated and the next moment you are in the body of another mammal lost and confused in the forest for an entire lifetime before being transported back into your body and only a minute has gone by - but your trip is to last another 9 hours, should you fight it?

There was a lot of this, during that out-of-body-period. I existed in multiple places/points in time at once as different people of various ages/genders/nationalities and then occasionally as animals, and lived entire simultaneous lifetimes. At one "time", in places + times A, B, C, D as different living things. Really does a number on your sense of self for a bit, heh.


and that had never happened to you in your other trips?


No, was really strange, I was pretty experienced by then too. Was never the same after.


Did you take 300ug before?

Sorry for the questions, we can talk about it somewhere else; just add an email or protonmail account to your hackernews account and I'll mail you there.


Did you try LSD after that time in your teens?

I think it is a bit too reductive to say they're neutral, just yet, but I am willing to say they can be used responsibly if the right information actually existed - but like with any science I am open to changing that if the conclusions were found to be different. Again let's just stick with acid instead of all psychedelics.


I tried once or twice after that, both times it turned immediately into flashbacks and led to horrid experiences so I called it quits for good.

I had done it probably ~20-25 times by that point, along with a bunch of other stuff.

    LSD
    Mushrooms
    2C-B
    2C-C
    2C-I
    DMT
    4-AcO-DMT
    5-MeO-DMT
    5-MeO-MiPT
    DOM
There might be some others I've forgotten, it's been a long time.

> Again let's just stick with acid instead of all psychedelics.

What you won't find in academia or textbooks is that, at a high enough dose, all psychedelics feel the same. You reach a point where it's indistinguishable and the unique properties vanish. It's hard to describe if you don't have experience with a bunch of them, but there's this "peak psychedelic state" where they all sort of converge, which is what I only assume is the result of your serotonin receptors getting completely bombed/saturated.

Personally, I was much more of a fan of phenethylamine psychedelics (particularly the 2C series), they're more clear-headed and "light"/enjoyable. The time dilation from psychedelics makes the 12-16 hours from LSD feel like days, and by the end of it, generally the last 4-6 hours you just want to be finished with it already.

It's really difficult to make a blanket statement like "can be used responsibly" about psychedelics, because it's a dice roll. No matter how cautious you are, there's always the possibility that this time, things go sideways. Though most people (when I was in that scene as a teen) couldn't really empathize after my bad trip because they'd never had one, so it's a rare occurrence. Maybe I was psychologically predisposed, who knows.

But I do think that people stand to gain a lot from having a psychedelic experience in their life, and from having an experience taking MDMA and talking with someone they love.


> Though most people (when I was in that scene as a teen) couldn't really empathize after my bad trip because they'd never had one, so it's a rare occurrence.

Yeah this is another thing I've seen.

Online there are lots of stories of "bad trips", like this one.

In person it's "what happened? I've never had a bad trip [so what's wrong with you]". It is very unscientific, and even for the people that do empathize, it gets reduced to just a "bad trip". No discussion about PTSD. And then you can't talk to anybody else about it because these are illicit substances.


Just some relevant info I looked up after reading your post....

> Permanent schizophrenic zombie, maybe a bit extreme, but severe and traumatic long-lasting psychological damage is a not-uncommon phenomena.

https://english.stackexchange.com/questions/6124/does-not-un...

https://towardsdatascience.com/an-introduction-to-multivaria...

HOW PSYCHEDELICS REVEALS HOW LITTLE WE KNOW ABOUT ANYTHING - Jordan Peterson | London Real --> https://www.youtube.com/watch?v=UaY0H9DBokA

Jordan Peterson - The Mystery of DMT and Psilocybin --> https://www.youtube.com/watch?v=Gol5sPM073k

> LSD being the particular substance has nothing to do with it, in my opinion. I was young, dumb, reckless, and played with fire then got burned. It could have happened with any of the other dozen psychedelics I took, but it just so happened to be LSD the one time that it did.

https://en.wikipedia.org/wiki/Hallucinogen_persisting_percep...

I have a close friend who had the same experience with excessive use of marijuana, but my money would be on psychedelics being far more likely to produce the outcome you unfortunately experienced. He's much better today, but not entirely "ok".

> But I want to add, that while giving me the most nightmarish, traumatizing experience of my life, the best/most positively-profound experience has also been on the same substance. I grew up in a pretty abusive household and didn't do well forming relationships growing up, and had a lot of anger and resentment in my worldview. After taking psychedelics (LSD, 2C-B, Shrooms) and MDMA with the right group of people a few times, my entire perspective shifted. For the first time in my life, it felt like I understand how it felt to be loved, and what "love" was, and how we're "all in this together" so we may as well be good to each other while we're here.

This sounds rather similar to my friend's story.

Can Taking Ecstasy (MDMA) Once Damage Your Memory?

https://www.sciencedaily.com/releases/2008/10/081009072714.h...

According to Professor Laws from the University’s School of Psychology, taking the drug just once can damage memory. In a talk entitled "Can taking ecstasy once damage your memory?", he will reveal that ecstasy users show significantly impaired memory when compared to non-ecstasy users and that the amount of ecstasy consumed is largely irrelevant. Indeed, taking the drug even just once may cause significant short and long-term memory loss. Professor Laws findings are based on the largest analysis of memory data derived from 26 studies of 600 ecstasy users.

> (from your comment below) I took 300ug of LSD recklessly on a particularly bad day for me, in a particularly uncomfortable setting.

https://www.trippingly.net/lsd/2018/5/3/phases-of-an-lsd-tri...

Lots of details, plus dosage guide (25 ug and up) & typical experiences

https://www.reddit.com/r/LSD/comments/34acza/do_you_guys_bel...

imo 300ug is the point where you need to have some serious experience with tripping to be able to handle yourself. because if you're coming up, the acid is already circulating your bloodstream, and you get that horrible sinking sensation of thinking you've taken too much... you're in for a really bad time if you don't know how to control the trip.

I think it's difficult to say how big a dose really is until you've had a bad trip on it. only then can you see how insidious everything can get and as such just how intense 300ug can be. the reason people say not to start on doses like that is so they will AVOID those horrible experiences. so yeah, 300ug is a large dose, just because if shit goes wrong on it then you're fucked.


> Has someone that thought they were taking LSD ever turned into a permanent schizophrenic zombie or ended up in a mental institution, or is it all urban legend? If someone didn't know they were predisposed to mental illness, is it reasonable to dismiss their experience in order to maintain that LSD is safe?

My good friend took "something" once (hard to tell what the dealer is selling you) and ended up in a mental institution, and is now in fact officially mentally disabled and on drugs for life. The drugs keep him stable enough that he's able to work, although he's still just a shadow of his former self.


What was he expecting to have taken?


Head over to the phantasytour forums, place is full of acid casualties.


The answer to this is really easy. Go to any mental institution and get to know the patients. An institution where they are allowed out, but are still in an enclosed area. A majority of the patients will have had some sort of heavy drug use in their past. Sure, what's causation and what's correlation, but it's pretty clear that drugs cause mental breaks.

How do I know this?

I have mental illness in my family and have spent considerable amounts of time at those facilities.


> Sure, what's causation and what's correlation, but it's pretty clear

What? How is it clear? As you wrote yourself, correlation is not causation.


I was just protecting against the inevitable counter-argument. The proof is what I have seen with my own two eyes.


That's unfortunate. Yes, I think it is clear too, but to so many it isn't, so I have to figure out: where is the truth? What circumstances cause what? Is it worthwhile to really avoid all psychonauts given the path they are on? For the ones with positive experiences on the total other side of the spectrum that take them into the spiritual mumbo jumbo, is this worth listening to? Is there a functional difference between the path the microdosers are taking and the ones that use 1000ug occasionally (i.e. after smart people's 2.5 year stint at Google are they all on the path to a mental hospital?)

So many questions.

Where is the medical journal that supports your conclusion "vast majority of mental health patients have a self-reported history of drug use of these specific drugs"? I guess it can't exist, because it's people in crisis self-reporting a variety of substances that even the user would have no idea what was actually given to them.


> But when nobody knows what they're actually getting because it doesn't exist in a legal framework, then it muddies the whole experience.

Trust (knowing the chemist directly, indirectly, ...) in specific individuals > a largely unknown (but known to be imperfect) system, for many people anyways. Obviously this isn't practical for the not well connected, but it's all we got for now.

But as for your question, I've seen little to suggest it's anything more than war on drugs propaganda and hearsay.

https://en.m.wikipedia.org/wiki/Reefer_Madness

https://en.m.wikipedia.org/wiki/Chinese_whispers


yes this is my current inclination, but I can also concede that there are limitations in getting the truth of the matter.

even in this very thread there is someone that has been in the mental hospitals and seen problems "with their own two eyes", but is unwilling to name names as part of a code to remove any social/legal/professional consequence for themselves or the "crazy" people there


Dipping one's toes in the water is always an option, but I heard a rumour psilocybin is the safer route, due to less likelihood of illegitimate product.

If you live in a big enough city, there should be meetup groups where you could meet some people and have long discussions.

Is your interest only curiosity, or is it medical related? Sorry, on mobile and pressed for time so can't scroll through the thread..


This entertaining article lists what happened to the early LSD researchers: https://slatestarcodex.com/2016/04/28/why-were-early-psyched...


Entertaining, and the comments are a good addendum, but they are just as speculative. Maybe there just isn't good information; it's just more of the contradictory information.

"lasting permanent changes, obviously"

"I’ve personally seen several people experience total amnesia after tripping on high doses." No further information.

"Not lasting permanent changes"

what.

Names, sources, medical records, news reports, court cases, there has got to be something out there!

