Hacker News
Quantum Mechanics for Programmers (oyhus.no)
240 points by taheris on March 29, 2017 | 74 comments



Hmm he seems to imply that MWI is the "right" interpretation, and that the measurement problem is solved. Most physicists would not agree. If you follow the link to his MWI description, there's this gem:

> But fortunately, I knew computer science, which most physicists do not know, with the Church-Turing thesis, which roughly states that anything physical can be simulated by a computer.

But that is not what the Church-Turing thesis says. A Turing machine cannot mimic a truly random physical process, pretty much by definition.

And then:

> This means that observers can be simulated by a computer.

Except that nobody really understands what kinds of physical interactions qualify as measurements -- or even if that is the right framework to use at all.


I'm a quantum physicist (well, we don't say that, I'm an atomic physicist, but for anyone not aware it is 99% quantum mechanics we do all day), and the MWI isn't universally accepted, but it's not universally rejected either. Plenty of important physicists interpret quantum mechanics that way, and I do too (I am not important though). That's not to say I'm confident it's correct, just that it's the most sensible way to understand the theory as we know it so far.

It might not completely solve the measurement problem, in that the purported derivations of the Born rule are hotly contested, but neither does any other interpretation - it gets further than most. I don't see anyone else except pilot wave theory (which violates locality) deriving the Born rule either.

"measurement is not special, and observers are just quantum systems like any other" seems like it ought to be the default assumption to make, in the absence of any evidence to the contrary, and honestly I think the reputation of MWI as a bit crazy comes about for purely historical reasons. Namely, Bohr didn't like it and he held a lot of sway.

Physicists are human, and parallel universes are a pretty strange idea, so most people reject it without thinking about it in much detail.

Most physicists' opinions shouldn't count for much anyway, because the measurement problem simply doesn't come up in our day to day work, so most haven't thought about it much.

Furthermore, plenty of physicists don't actually grasp the fundamentals of their own field - they specialise a lot and can use specialised theories to get what they need done without understanding in detail where it came from.

So I really think the decoherence people, as cranky as their website looks and as oddly as they write, ought to be the ones whose opinions count on the matter.

Edit: seriously, check out their website, straight outta the nineties:

http://decoherence.de


> Furthermore, plenty of physicists don't actually grasp the fundamentals of their own field

I once met a professor at a quantum chemistry conference who argued with me that I could not converge on the exact eigenvalues of a helium atom (assuming a simplified Hamiltonian with a few Coulombic terms). He stated the oft-repeated mantra that "the Schrödinger equation can't be solved for any element other than hydrogen", so I wrote a program that brute-force diagonalized the Hamiltonian to demonstrate otherwise. He was confusing the notion of a closed-form solution (an analytical expression that gives the solution in terms of specified elementary functions) with that of a numerically exact solution (one that converges on the exact solution but can't be expressed in terms of specified elementary functions).
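To make the distinction concrete, here's a toy sketch of the same idea (not the helium calculation -- just a discretized 1D harmonic oscillator in Python, used purely as an illustration): the numerical eigenvalues converge on the exact ones even though the code never touches a closed-form expression.

    import numpy as np

    # Discretize H = -1/2 d^2/dx^2 + 1/2 x^2 (1D harmonic oscillator, atomic units)
    # on a grid and brute-force diagonalize it.
    N, L = 2000, 20.0
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]

    # Second-order finite-difference Laplacian for the kinetic term
    laplacian = (np.diag(np.full(N, -2.0))
                 + np.diag(np.ones(N - 1), 1)
                 + np.diag(np.ones(N - 1), -1)) / dx**2
    H = -0.5 * laplacian + np.diag(0.5 * x**2)

    print(np.linalg.eigvalsh(H)[:4])  # ~0.5, 1.5, 2.5, 3.5, the exact eigenvalues

Shrink the grid spacing and the numbers converge on the exact spectrum; whether a closed-form expression exists is a separate question entirely.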

Another professor who was teaching a course on statistical mechanics once said that the single particle wave function is more fundamental than the multi-particle wave function. Never mind the fact that his research involved density functional theory, which famously fails in those cases where the molecular wave function can't be well approximated using a product of single particle wave functions.


> Another professor […] once said that the single particle wave function is more fundamental than the multi-particle wave function. Nevermind […] his research […] where the molecular wave function can't be well approximated using a product of single particle wave functions

Depends on what you mean by "multi-particle wave function". The way it is usually understood (I think), it includes all possible tensor products of single-particle wave functions. Then it should be possible, shouldn't it?


Tensor products only describe the separable (i.e., unentangled) states.


As I understand it, that is only true when dealing with density matrices. For example, |0>⊗|0> + |1>⊗|1> is entangled yet is built out of tensor products. I think that Xcelerate is correct in saying that all combinations of the basis vectors form the basis of the multi-particle Hilbert space, as a single-particle wavefunction is just a vector/ket.
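For concreteness, here's the standard check (a worked sketch using the state above, ignoring normalization). Suppose it could be written as a single tensor product:

    (a|0> + b|1>) ⊗ (c|0> + d|1>) = ac|00> + ad|01> + bc|10> + bd|11>

Matching against |00> + |11> requires ac = bd = 1 and ad = bc = 0, which is impossible. So the state is a sum of tensor products of basis vectors without being a tensor product of any two single-particle states -- which is exactly what "entangled" means here.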


Sure, the tensor product space has a basis that is formed by the tensor products of all pairs of basis elements. But this is different from saying that any particular vector in the space is a tensor product of elements from the individual spaces.

But I'm possibly just misunderstanding what you're saying.


Fair enough, I should have been more precise: By "all possible tensor products" I actually meant all elements in the tensor product space, including all linear combinations of tensor products of single-particle states.


> Furthermore, plenty of physicists don't actually grasp the fundamentals of their own field - they specialise a lot and can use specialised theories to get what they need done without understanding in detail where it came from.

This. Sometimes physics does feel like a religion. The fundamentals (including basic assumptions and "proofs" that are complete bs in the mathematical sense) are getting reiterated time and again till people actually start believing in them. From what I've experienced myself it basically works like this: In the beginning, you come across some claim or fundamental assumption you can't follow and make a mental note to follow up on it and sit down to fully understand it. Then you get sidetracked and next time you come across said claim, you're like "Damn, I wanted to look that up!" and here you are, making a mental note again… Now somewhere between the fifth and tenth time this happens, you've forgotten about your mental note completely and have just accepted the claim as fact.

It's basically the brain tricking you into thinking the claim is correct, the more often you read about it. It's like habitual behavior you don't actually process consciously anymore.


I completely agree, and I would say that's probably the historical origin of the Copenhagen interpretation. It was clearly initially a note-to-self that something wasn't right and required figuring out, and somewhere along the line that confusion became an axiom that later generations just accepted, not getting the message that it was unfinished.

Physicists aren't dumb, but they just haven't looked into it more because it's surprisingly not relevant to the majority of work. So they're just repeating the confusion of past generations as if it's fact.

But, particularly in my field, we're getting seriously good at manipulating more and more exotic quantum states. I'm working on things involving 'weak' measurement, where you really need to know what counts as a measurement or not, and how much - you can do a partial measurement and only partly (appear to) collapse a wavefunction. These types of experiments are becoming more and more common, and it's happened a few times now that I've heard people saying "hang on, what counts as a measurement in this context?" and then having to read a decoherence textbook in order to do their atomic physics.

With these sorts of things as well as quantum computers, decoherence and the measurement problem are intruding into a larger proportion of people's work, which means repeating the mantra of the previous generation isn't going to cut it anymore; people are going to actually read decoherence textbooks and decide for themselves.


I think that MWI is pretty much like any ontology, in that you find a lot more interest in it when you're not dealing with physicists. At the end of the day, it doesn't change the math, it doesn't change much of anything, and it probably isn't testable. For me, that has all of the hallmarks of an uninteresting topic.


I'm a layman and I don't really understand much of this, but I'm really intrigued: What's the general consensus in your field on the existence of more than 3 spatial dimensions?

Seems to me that there are a few phenomena that appear to be more or less random. Could they be perturbations caused by activity in dimensions we cannot perceive?


For the first question: I'd say pop-science does a surprisingly good job of conveying the consensus on extra dimensions -- they are a totally reasonable possibility, but their effects would be felt in realms far beyond those that experimental physicists can currently study well.

But if they exist, they probably have important (but unknown) effects. And your proposal that they explain randomness is a good example of that. But to me it doesn't feel right.

Quantum randomness somehow doesn't seem to be of that kind. Perturbations from unknown dimensions might well look like randomness, much like an RNG in a computer. But that would still effectively be a classical random process -- it would not show the weird correlations found in QM.


I must clarify that by "activity" I don't mean directed actions (i.e. not talking about ghosts or gods) but fundamental elements — the waves and particles and fields — possessing properties and movement in more than 3 dimensions (not including time).

Like something poking through or pulling on a fabric from the other side, if such a property has, say, wavelike motion in a 4th spatial dimension, it may produce regular effects that manifest as fundamental features of our 3D space, or seem random to us if a bunch of things are bumping and deterministically interacting with each other in extra dimensions. Has this ever been considered in explaining things like [0] and [1]? Might have something to do with "dark" matter/energy too; i.e. stuff is there, just occupying other dimensions.

[0] https://en.wikipedia.org/wiki/Pair_production

[1] https://en.wikipedia.org/wiki/Quantum_foam


Thanks for weighing in.

Decoherence is great at explaining the loss of interference. It may even address the "preferred basis" problem (though there seems to be good reason to believe that it might not). But ultimately what I want to know is why I experience just one outcome, and decoherence doesn't really help here.

MWI partially resolves that ("you don't just experience one; there are many of you!"), but not in a way that satisfies me.


> But that is not what the Church-Turing thesis says. A Turing machine cannot mimic a truly random physical process, pretty much by definition.

It's not clear that a truly random process exists. There exist deterministic interpretations of QM, for instance. Certainly unpredictable processes exist, but that's an entirely different classification (whether a Turing machine halts is also unpredictable, but still deterministic).

Further, Turing machines can generate pseudo-random outputs that pass all known randomness tests.


I thought "coherence" was the model by which quantum systems spread entanglement to other systems; the larger the system the 1st system into contact with, the bigger the effect of coherence loss in the 1st system and the larger the "measurement"


Also see this piece by Steven Weinberg:

http://www.nybooks.com/articles/2017/01/19/trouble-with-quan...

"One common answer is that, in a measurement, the spin (or whatever else is measured) is put in an interaction with a macroscopic environment that jitters in an unpredictable way. For example, the environment might be the shower of photons in a beam of light that is used to observe the system, as unpredictable in practice as a shower of raindrops. Such an environment causes the superposition of different states in the wave function to break down, leading to an unpredictable result of the measurement. (This is called decoherence.) It is as if a noisy background somehow unpredictably left only one of the notes of a chord audible. But this begs the question. If the deterministic Schrödinger equation governs the changes through time not only of the spin but also of the measuring apparatus and the physicist using it, then the results of measurement should not in principle be unpredictable. So we still have to ask, how do probabilities get into quantum mechanics?"


Doesn't the probability basically follow from the uncertainty of your own eigenstate, ie. the system performing the measurement? This contextuality is why deterministic interpretations of QM also entail probabilistic measurements.


It is generally agreed (except, perhaps, by the strongest champions of the decoherence program) that decoherence does not completely solve the measurement problem.

Some good references here: http://physics.stackexchange.com/questions/295527/decoherenc...

It helps explain the loss of interference, but it does not resolve the question of why and how we see one particular outcome.


> that decoherence does not completely solve the measurement problem

It's kind of funny how the problem keeps getting pushed to higher levels of "meta":

If you consider the experimenter and his system, measurements of (non-eigenstate) quantum systems appear indeterministic to him. However, the state of [experimenter + system] is governed by an entirely deterministic equation that follows a reversible, unitary path through time. Great! But the problem is that you then have another experimenter who measures that composite system, and the outcomes he sees likewise appear indeterministic. So now you consider the system of [experimenter 2 + [experimenter 1 + system]], and we've got infinite regress — a.k.a. the measurement problem.


>So now you consider the system of [experimenter 2 + [experimenter 1 + system]], and we've got infinite regress — a.k.a. the measurement problem.

At the risk of sending things off on a huge tangent, it's interesting to see physicists recognizing that an infinite regress is, at least sometimes, unsatisfactory (even though there is of course nothing incoherent per se about the concept of an infinite sequence). Physicists usually tend to give short shrift to metaphysical arguments that rule out certain states of affairs on the grounds that they would involve an infinite regress of a problematic kind. But the logic you're using to argue against decoherence as a solution to the measurement problem is very similar to e.g. Aristotle/Aquinas's argument that the causal hierarchy must have a terminus. I'm not saying that in an attempt to start an argument about God. It's just interesting to see similar reasoning used in such different domains. (And of course in neither domain is it entirely clear that the reasoning works.)


The point is that you can prove that a thing is self-consistent without proving it is true. And I think what you are calling an "infinite regress" is, in this case, self-consistency.

Proving self-consistency is a decent achievement, and is certainly not an error. But it is only weak evidence in favour of a position.


> And I think what are calling an "infinite regress" is, in this case, self-consistency.

I don't understand what you mean by this.


> quantum systems appear indeterministic to him. However, the state of [experimenter + system] is governed by an entirely deterministic equation ...

This sort of thing only makes sense in the context of many-worlds QM, and it is amusing how many professed non-many-worlders say such things.

We often describe quantum systems using an entirely deterministic (Schroedinger) equation. But we don't know in what sense that equation describes the physical state of the experimenter + system, or in what sense it is "just" a probability model.

If you choose to include the whole thing as physical reality, then you are left with all the terms in the equation -- and thus all of Everett's multiple worlds. Fine, that is a logically coherent position. But it's not the only one.


> and it is amusing how many professed non-many-worlders say such things.

I'm not saying I do or don't believe any of this (if anything, I'm interpretation-agnostic at the moment). I'm just pointing out that there is a contradiction in having one postulate demand unitary state evolution (the Schrödinger equation) for some ill-defined "system" while another postulate says that unitarity is broken at the system/environment boundary. While there's been plenty of attempts to work around this (e.g. https://arxiv.org/abs/quant-ph/0101012), I wouldn't say that anyone has formulated a consistent set of axioms that definitively resolves the issue.

> We often describe quantum systems using an entirely deterministic (Schroedinger) equation. But we don't know in what sense that equation describes the physical state of the experimenter + system, or in what sense it is "just" a probability model.

Agreed. It's certainly a useful model but it leaves out all kinds of interesting phenomena that we observe in practice (namely, relativistic and radiative effects). Curiously though, if you turn to QFT for a better probabilistic model, Haag's theorem (https://en.wikipedia.org/wiki/Haag%27s_theorem) implies that a universal Hilbert space representation cannot describe both free and interacting fields (a problem that the non-relativistic Schrödinger equation doesn't have!)


I feel like if I claimed the Schrödinger equation was just a probability model, I'd be immediately lambasted because "the wave equation is reality", or immediately branded a "local hidden variables proponent".


Let's say I toss a fair coin; before I look at it, the outcome is indeterministic to me: there's a 50/50 chance of heads or tails. Once I look at it, I gain 1 bit of information. To another experimenter, the 'me + coin' system is indeterministic: it's either me seeing heads or me seeing tails. Once I tell that experimenter the outcome, they gain 1 bit of information. To a different experimenter, the two of us with the coin is indeterministic... and so on.

Is this scenario fundamentally different than the quantum system? Is this scenario also a "problem"?


What you just described is a hidden variable model which cannot reproduce the behavior of entangled particles. For details look up Bell's theorem.
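For anyone following along, here's a small sketch of what the theorem quantifies (the usual CHSH combination for a spin singlet; the angles are the standard textbook choice, nothing specific to this thread):

    import numpy as np

    # Pauli matrices; spin measurement along angle t in the x-z plane
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def spin(t):
        # measurement operator along angle t in the x-z plane
        return np.cos(t) * sz + np.sin(t) * sx

    # Singlet state (|01> - |10>)/sqrt(2)
    psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

    def E(a, b):
        # correlation <psi| A(a) (x) B(b) |psi> for measurement angles a, b
        op = np.kron(spin(a), spin(b))
        return np.real(psi.conj() @ op @ psi)

    a1, a2, b1, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
    S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
    print(S)  # ~2.83 = 2*sqrt(2); any local hidden-variable model caps this at 2

A coin whose outcome is already fixed before anyone looks (however unknown) can never push that combination above 2, which is the conceptual gap between the two scenarios.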


Oh I'm well aware of Bell's theorem; I'm a trained Physicist :)

Bell's theorem rules out theories of local hidden variables. It says nothing about non-local hidden variables, or even something more mundane like determinism (sometimes referred to as "superdeterminism").

In any case, I'm not sure Bell's theorem has much impact on my question: why is one of these things (transfer of information from a coin toss) not a problem, conceptually; whilst the other (entanglement of quantum systems) is a problem, conceptually?

(Personally, I don't find either particularly troubling; just curious to know what the philosophical distinction is, without appeals to "quantum weirdness")


A classical analogue to entanglement is not problematic for causally connected processes and in some cases it's a good model, like with human reasoning or neural networks. QM is a generalisation of Bayesian reasoning that can handle non-commuting variables.

Yet entanglement is observed non-locally, even backwards in time or between degrees of freedom that never co-exist. Nobody has been able to create a non-local model that doesn't require fine tuning and the idea goes against the spirit of special relativity.

Thus, I would agree with you, there is no "problem" with entanglement. Except non-locality.


I've long thought that the measurement problem is a problem with the interface between consciousness and reality and that it's more of a psychology problem than a physics one.


OK, but I don't think that's quite what Xcelerate is saying here.


> it does not resolve the question of why and how we see one particular outcome.

Agreed. Decoherence doesn't explain particular outcomes. It explains the scale & magnitude of mixing quantum states from different systems.

I thought "what kinds of physical interactions qualify as measurements" was referring to a different part of understanding QM. Decoherence doesn't explain which outcome, it does explains "part" of (or place constraints upon) the mechanism of measurement process.


The problem here is that very much of what we mean by "scale and magnitude" boils down to the frequency of particular outcomes when situations are repeated.

So decoherence only explains those things after you already have the Born rule (P ∝ |Ψ|^2). But that's the very thing we are trying to explain!


Maybe you are trying to explain the Born rule, but I was not claiming to explain that; I've already stated decoherence doesn't explain why we get certain outcomes instead of others.


It explains what we experience perfectly well. In the simple case of a 50/50 coin flip, there will be two equally real future versions of you. One will see heads, one tails.

You don't need to dive into quantum mechanics to understand this concept; instead, consider being cloned twice while you are asleep, then killing the original.

Your current self knows what will happen; you'll fall asleep and wake up as either one or the other, becoming both but never being both.


This puzzles me extremely. There are very insightful statements like

> One does not deduce them like one do in math. The physicists who actually did this stuff apparently knew this, and considered the math more akin to toying with the models to see what happens, to see if they could get better models

Which is absolutely true. Part of the problem with quantum mechanics texts is that they're almost all descended from Oppenheimer's lectures (via Schiff's book) to graduate students who already knew this, and just needed someone to brain dump the latest techniques.

But then it's mixed with really basic misunderstandings, like

> Many physicists like to believe that this makes the underlying model irrelevant; that the matrix behaviour, its eigenvalues, is the only thing that matter. You will encounter lots of this in books about Quantum Mechanics. This however is not science, because it ignores Ockhams Razor; the models shall be the simplest ones. A sparse matrix is simpler than when it is Fourier transformed, or put into atom orbitals. (I thank Eliezer Yudkowski who gave a reminder that Ockhams razor belongs here too.)

Quoting Eliezer Yudkowsky is a useful heuristic for not taking someone seriously, but this quote implies that the author missed the whole point of linear algebra, and is falling into the traps described by Theorem IV in van Kampen's [Ten Theorems about Quantum Mechanical Measurements](http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=617...).


> Since you are a programmer, you do not know what science is, even though you may believe you do. A hint: Computer science does not contain science, just mathematics.

He probably wrote this half in jest, but it is actually a serious issue in fields like Natural Language Processing and Computer Vision that have their intellectual roots in Computer Science.

Papers in CV and NLP are about data structures, algorithms, software engineering methods, neural networks, statistical models, and so on. They do not discuss anything in particular about the structure of language or the properties of images. This is because they are descended from CS. In CS, exemplar results are ideas like QuickSort and Dijkstra's algorithm. These algorithms work on any list or graph; you do not have to know anything special about the particular properties of the list or graph you are operating on.

As an illustrative anecdote, I went to an NLP talk given at MIT by a well-known Google/Berkeley parsing researcher. He gave a talk about a system that used neural networks to do sentence parsing. In the QA session, I mentioned the idea of verb argument structure, and asked how the system would learn, for example, that a verb like "persuade" or "argue" can take a that-complement, while other verbs cannot. He didn't really have an answer, because it wasn't the kind of thing that he worried or thought about. My guess is that he did not consider such a question to be relevant to his field.


I do research on Deep Learning. That seems the exact kind of question they would deem relevant to their field.

I don't know much about NLP, but if I had to guess, the answer to your question is to include it in the training data. I don't know what model they use, but presumably it parses out sentence structure in a specific format and uses that as input to some neural network architecture.

Not really sure why he didn't answer your question, it's the exact kind of question people do NLP to answer.


It's puzzling to see the author call himself multiple times a scientist while lending so much importance to Occam's razor (which is spelled differently in the article, not sure if it's an alternative spelling in his language or a mistake).

Occam's razor is not a law. It's not a fact. It's a simple suggestion if you're looking for a starting hypothesis.

Not sure which way to start to investigate a phenomenon? Pick the simplest one and verify that one. It doesn't mean it's right, it doesn't mean it's wrong, just that it's a reasonable first guess.

But not a proof. Not a fact. Just a guess that's statistically more likely to be right.


Ockham's razor is not a heuristic, it's the only principled universal prior for Bayesian reasoning. This was formalized in Solomonoff Induction.


It's not only a heuristic, it's also a principle. It states that "when choosing between two theories which make the exact same predictions, choose the one with the fewer assumptions". Which is to say, correctness comes first, of course, but when deciding between two equally correct theories, choose the one with the fewer assumptions.


Not quite accurate. "Fewest assumptions" assumes that axioms are equally comparable, but this isn't necessarily true. Obviously one should eliminate redundant assumptions, ie. ones that have no effect on observable predictions between two theories, but this provides little guidance for selecting between two theories with drastically different axioms that differ in only a small set of predictions for which we have no data.

Ockham's razor, when formalized as in Solomonoff Induction, suggests preferring theories with the lowest Kolmogorov complexity.
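For reference, the formalized version is roughly: for a universal prefix machine U, weight every program p by 2^(-|p|), so the prior probability of a string x is

    M(x) = Σ_{p : U(p) starts with x} 2^(-|p|)

i.e. "fewer bits of description" literally means "higher prior", which is the Kolmogorov-complexity reading mentioned above (up to the usual choice-of-machine constants).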


Ockham is arguably the proper spelling when the name isn't being rendered in Latin; Ockham is where William was from.


> It's a simple suggestion if you're looking for a starting hypothesis.

While it can be used to select priority for exploring hypotheses, it's at least as valuable as a method of choosing from among models with equal explanatory power once they've been tested and established to have equal explanatory power.


Ock/cham's razor is a heuristic, not a law. It also runs into weirdness combined with MWI. Are infinite universes the simplest explanation, or have you just literally broken Occam's razor in the most egregious way possible?


Occam is the Latinized version of Ockham, as in "William of Ockham" [1].

[1]: https://en.m.wikipedia.org/wiki/William_of_Ockham


Which must be the correct spelling, because it's shorter.


This seems just to be a model of systems evolving according to discrete difference equations, admittedly derived from quantum mechanics; it doesn't seem to have much to say about quantum mechanics per se. It reminds me of a lower-(math-)tech version of SICM (https://mitpress.mit.edu/sites/default/files/titles/content/...).


A slightly wacky article, but I do think there is lots more scope for explaining quantum physics via programming concepts. I got what little understanding I have of QM by creating a game that simulates it. I even made my own wacky article as well :) https://linkingideasblog.wordpress.com/2016/04/25/learning-q...


Why not categorical quantum mechanics?


Yeah, I'm reading a nice book from Coecke and Kissinger [1] that was just published and am loving it; it's the story of string diagrams for a wider audience, with all the details fleshed out (not just hinted at as in Baez's TWFs). Monoidal categories, tensor networks, directed PGMs, quantum computing and even vector space NLP semantics are all particulars seen from this vantage point.

[1] Picturing Quantum Processes, ISBN 1108107710


This book looks very promising. I just bought the kindle version on amazon. Thank you.


> Here is a model of waves, as in light or sound, but not as in water waves or electron waves

How are water waves different from sound? Water waves are sound, aren't they?


The wave that he is modeling is due to having a field in space-time that will interact with nearby values of the same field in a linear way, which results in it moving at a constant rate.

The waves that you see in water are the surface representation of 3-d movements under the surface of the water. So you get effects such as the depth of the water column that a wave moves determining its velocity. A wave that moves a very deep water column moves very fast. (One that is a half-mile deep can move as fast as a jet!) A wave that moves a shallow water column moves slowly.

Another interesting fact about waves is that there is a significant nonlinear interaction between the depth of the water and the depth of the wave. As you come to shore this causes the wave to rise up. Surfers enjoy this effect when it comes to normal wind driven waves. But in the case of very deep and fast waves, the effect is very much like a tide unexpectedly coming in. The result is known as a tsunami or tidal wave.

These complex behaviors mean that water waves can behave very differently from light and sound.
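For scale (a standard back-of-the-envelope estimate, not from the parent comment): in the shallow-water limit the speed is roughly c ≈ √(g·d), so with g ≈ 9.8 m/s² and a typical open-ocean depth of d ≈ 4000 m you get c ≈ 200 m/s, around 700 km/h -- which is why tsunamis cross oceans at jet-like speeds and then slow and pile up dramatically as the water shallows near shore.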


Well, water waves can either be surface waves or pressure waves. The latter is the same mechanism as sound.


This article does not elucidate anything about Quantum Mechanics, is quite confusing, and indeed is plain wrong in several aspects. Very disappointing.


I actually find those notes particularly well suited for programmers:

http://www.cl.cam.ac.uk/teaching/1516/QuantComp/materials.ht...


I feel like this is what's wrong with the world: "In science one guesses at explanations. One does not deduce them like one do in math."

As a programmer who took quantum mechanics in college, I have to admit, I didn't understand most of it. Someone I respect once told me that if you think you understand quantum mechanics, then you don't understand any of it at all.

The mathematical portion, vector mathematics with imaginary numbers, was the only part that was interesting to me. It seemed to me that mathematical deductions were necessary because the phenomenon modeled nothing in our "reality."


This article could have been written by an algorithm. Everyone knows the meme about Quantum Mechanics being incomprehensible, like, ~"if you understand quantum mechanics, you don't understand quantum mechanics".

Quantum Mechanics requires randomness, because determinism is scary. Both probability and fate are functions of time, and time is the most interesting thing to look at. Generally, time is ignored, or at best "accounted for".

As a programmer I think time is more interesting than particles or fields or probabilities of wave-function whatever


>Quantum Mechanics requires randomness, because determinism is scary.

Isn't the many-worlds interpretation generally regarded as deterministic?


Yes, as is the de Broglie-Bohm interpretation. Some others make no assertions about determinism.

I interpreted "because determinism is scary" as a tongue-in-cheek aside. Either way, time is not ignored: there is serious research effort into the concept of time in quantum mechanics.


IIRC, it's deterministic in a sense that isn't equivalent to the way de Broglie-Bohm is deterministic. The latter is generally what people mean by deterministic, ie. it's a classical theory with an extra term to account for quantum influences.


Both theories model the universe as being in a definite, non-probabilistic state, and that the state at one time determines the state at all future times. But yeah there is some difference in the anthropocentric aspects, i.e. how our observation of probabilities actually arises.


The more I learn about physics and math, especially with regards to quantum theory, I start to get really freaked out. The amount of "neatness" to the universe is staggering. How there's no "inbetween" at the smallest scales. Everything is discrete.

The fact that simple arrangements of symbols on a screen can perfectly describe this behavior is mind blowing. It leads me to think there's no possible way we're not living in some type of computer simulation.


Minor nitpick: We definitely do not know whether everything is discrete. There are plenty of quantum mechanical phenomena that do not have discrete spectra (you can have light of any wavelength, for instance, with some caveats at the extremes of the energy scales). We also do not have theoretical or experimental proof that space-time is discrete at the Planck length scale - all we know is that our current theories break at that scale.

See https://physics.stackexchange.com/questions/9720/does-the-pl... (the given answer is pretty great, but beware, the author is known for being a bit hostile in his non-physics opinions)


Thanks for exposing my ignorance. I had an "aha" moment realizing that relativity makes it so that there can be no "final" level of energy.


Luboš is known for being extremely hostile in all his opinions. :)


> It leads me to think there's no possible way we're not living in some type of computer simulation.

I find this line of reasoning inadequate. In the 19th century, would you not be compelled to believe that the universe was made of mechanical pulleys, levers, and pipes because the universe operated with the perfection of a well-designed machine?

OK, so there are some phenomena that correlate with (bear a resemblance to) how computers behave. Well, there are phenomena that correlate with a great many things; we shouldn't be surprised that computers and information theory are among them.


> How there's no "inbetween" at the smallest scales. Everything is discrete.

I think the article is a bit misleading about this. We don't know if space and time are discrete or continuous. As far as we know space and time are continuous and all the current theories treat them as continuous values.

Discretization of space and time is useful for numerical simulations, if you pick the grid small enough, but it's only a trick to do the calculations in the computer.

Anyway, it is possible that someday in the future we will discover that space-time is discrete, and it has a really tiny grid, and it has a tiny effect in the calculations. But no one is sure if this will happen in the future or not.


Hey, something I can comment on properly for once ;). Handed in my final Quantum-field-theory homework yesterday (or blood-sweat-and-theory as I called it, great fun) and finishing an MSc in Computational Science 'soon'.

From what I can follow, most of it is, unfortunately, either varying degrees of wrong or just confusing as all hell. :\

Some assorted quotes:

>The electron is not in a single place, but instead spread out over all the positions, more or less. This is called "superposition".

No, the superposition principle states that individual states (contributions) can be summed in a linear fashion. I know this is not a very clear way of wording it, but it is one of those things that is quite hard to word yet very easy to understand once you see it.

(Although one could say this is a form of superposition, namely a sum of infinite delta functions in position-space. But this would be the most confusing example to use)
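Concretely: because the Schrödinger equation is linear, if ψ1 and ψ2 are solutions then so is a·ψ1 + b·ψ2 for any complex a and b. The position example is the special case

    |ψ> = Σ_x ψ(x)·|x>

i.e. a weighted sum of "the electron is exactly at x" states, which is why the article's picture isn't wrong so much as the most confusing instance to lead with.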

>The smart programmer would guess at a model containing more complex math that will get the array to model several particles, but no such thing exists.

Quantum field theory.

> These examples have used cubes with a width of 1000 voxels, and of 1000 time instants. The Universe use a width of something like 10^70 voxels. The same goes for time.

I assume this is (roughly matches) size-of-observable-universe/Planck-length. But this is a misinterpretation of the Planck length. As far as we (I) know, spacetime is continuous. (String theorists might disagree, I am not familiar.)
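(For what it's worth: observable-universe diameter ≈ 8.8×10^26 m divided by the Planck length ≈ 1.6×10^-35 m gives about 5×10^61 per dimension, so the article's 10^70 is in the same spirit but several orders of magnitude high.)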

Furthermore, the many-particle approach is not at all reasonable. One would (usually) use lattice QFT, where one simulates a field for each type of particle (the field can have several components) and particles are identified as excitations of this field. The best known is lattice QCD.

>Quantum Electro Dynamics

Quantum Electrodynamics ;)

>Richard Feynman got the nobel prize for figuring out a way of doing this. His Quantum Electro Dynamics is a sort of dynamic programming method

This part is right (I have no idea what the next few sentences are trying to say). Feynman diagrams (those fun squiggly drawings) represent the results of some awful, awful integrals. The real analytical answer is integrated over two infinite spaces. But one can do a Taylor expansion to get an answer which can be computed. Feynman noted that you can read off a few rules from this approximation and assign drawings to them. The answer can then be computed by summing all topologically distinct graphs instead of the terms in the Taylor sum. It's much easier than it sounds (and sure as hell more fun than doing integrals), and you can copy-paste entire sections of your diagrams as long as the inputs and outputs match up. Perfectly suitable for dynamic programming.


There is lots of undefined behavior in the code examples, where x-1 is used as an index although x starts from 0. I guess he didn't want to talk about boundary conditions?
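If the goal was just to dodge the topic, a periodic boundary is a one-line fix. Here's a minimal sketch (my own, not the article's code, and in Python rather than whatever he used) of a 1D wave update where the x-1 neighbour wraps around instead of reading out of range:

    import numpy as np

    N = 1000
    psi = np.zeros(N)
    psi[N // 2] = 1.0      # field now, with a pulse in the middle
    psi_prev = psi.copy()  # field one step earlier (starts at rest)
    c2 = 0.25              # (c*dt/dx)^2, kept below 1 for stability

    for _ in range(100):
        # np.roll wraps around, so the x-1 neighbour of x=0 is x=N-1:
        # a periodic boundary instead of an out-of-range read.
        lap = np.roll(psi, 1) + np.roll(psi, -1) - 2 * psi
        psi, psi_prev = 2 * psi - psi_prev + c2 * lap, psi

Clamped or reflecting boundaries work too; the point is just that the choice has to be made explicitly.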


You can also read the Feynman Lectures on Computation, which has some notes on quantum computing.



