Historically, engineers or inventors would often create something, and scientists would then have to explain why it worked. In the last few generations the order has reversed: physicists and mathematicians come up with theories, and engineers have to build equipment to test them.
The EmDrive is one of the rare modern situations where someone has engineered a device that shouldn't work according to what we know and the scientists are having to come up with the explanation.
Personally, I find solving a mystery more exciting than purely intellectual theorizing, and the EmDrive has created a very interesting mystery.
There have also been a lot of false discoveries. Someone builds a machine that appears to do something, and after a few iterations it turns out to be nothing more than a bad measurement (or a scam). The same happens with theoretical ideas; the most famous are the aether and phlogiston. They were good ideas that explained a lot of experiments, but over time it became obvious that they were wrong.
> The EmDrive is one of the rare modern situations where someone has engineered a device that shouldn't work according to what we know and the scientists are having to come up with the explanation.
The obvious explanation is that they have an error in the measurement. Probably something gets very hot and the thermal effect produces a tiny force; some experiments show that the force needs some time to build up and continues for a little while after the current is switched off, just like a thermal effect would. Another possible explanation is that the high currents produce a magnetic field that produces a tiny force. They are putting a lot of electrical power into a small device and measuring only a tiny force, so it is very difficult to rule out experimental errors. A similar case was the faster-than-light neutrinos, which turned out to be the product of a bad measurement.
The encouraging thing about having some theory now is that it gives some insights into what to do to get a bigger effect. Lilienfeld built a field effect transistor in 1925. This was a major breakthrough, but wasn't pursued. He was using a copper/sulfur oxide on aluminum, sort of like a copper oxide rectifier, a known device at the time. That oxide apparently has some semiconducting properties. But lacking any theory, there was no clear way to make it better.
There was nothing to indicate that highly refined silicon (something nobody used back then) was the material to use instead of aluminum. Germanium diodes were known to rectify, but nobody understood why. Not until Bell Labs tried to figure out how germanium diodes worked and some semiconductor device physics was discovered was there forward progress.
Once the underlying mechanism started to be understood, progress was fast.
They did it last year and also found some thrust signals, although they were very careful to leave the door open to measurement errors, such as Lorentz forces.
The Eagleworks lab will also attempt a vacuum test in a better-equipped NASA lab.
I tend to think it's all vapourware: it goes against so many well-founded rules that it's more likely to be pathological science. The theory in the OP is also likely to be rubbish. The author of the paper was challenged many times and failed to provide sound answers. Furthermore, he supports a telepathy "researcher", which is also a red flag.
You're seriously suggesting that his mathematics should be taken less seriously because he has a friend who studies telepathy?
There's something to be said for Bayesian updating, and in this case I'm willing to say that people who have the right of it scientifically are less likely to support telepathy.
Is there any data to support your theory that people who are willing to study telepathy are less mathematically able than those who are not?
Because it seems to me that people who are willing to rule out an entire area of study for social reasons are less able scientists than those who are willing to go where the data leads regardless of reputational consequences.
This EM drive is a case in point. Studies seem to suggest there is a real thing happening here, but scientists are not willing to study it because of the reputational risk. Which is clearly bad science.
> This EM drive is a case in point. Studies seem to suggest there is a real thing happening here ...
That's not what I see. Every time the experiment is done with more precision and less noise, the effect diminishes again to near the limits of experimental error. That's exactly what you would see if the force was due to thermal (or other electromagnetic) effects rather than novel engineering principles. Nothing wrong with studying it, though. But I'd be willing to bet real money it all comes to nothing.
> Is there any data to support your theory that people who are willing to study telepathy are less mathematically able than those who are not?
No, of course there is no such data. And mathematical ability is only tangentially related to critical thinking in the real world.
I'm less concerned about telepathy per se than with "science as politics". The social sciences are seeing some serious problems because there are things that scientists cannot say or talk about because of their political implications. Climate science is hampered by the problem of having to be constantly aware of the political ramifications of their research: are they "helping the deniers" if they publish a result?
If we start bringing this bullshit into hard science then we'll break that too. This is maths and physics. There's the hypothesis, there's the experiment. Do the experiment and validate the hypothesis. If the experiment isn't working, devise a better one. If the new theory leads to predictions then test those predictions. It's all objective, it can be tested, and those tests can be reproduced.
It's like free speech. I may not like the things you say, but I will absolutely support your right to say them. If you find telepathy interesting enough to study, then that's awesome. If you think you've got a result, and I'm interested enough, then I'll look at your methods and your data and work out if I agree with you or not. But dismissing you as a crank just because you're studying it is anti-scientific.
And just like free speech is being threatened by politics and there are now things that people must not say, science is becoming threatened and there are things that must not be published. This is bad imho.
Out of all the futures that seemed possible at the beginning of the Cold War, ours is no less strange and awesome than one where telepathy were a mundane experience. Being skeptical about telepathy in the style of the great Sci Fi works of the latter half of the 20th century reflects the understanding that some ideas become reality, and some just peter out.
A few years ago I tried to train myself to strengthen my gaze, and girls described the feeling during that period as "burning skin". When I looked at them with concentration, they were able to feel it at distances of up to a hundred meters. I also saw a video of a similar experiment recently, but I cannot find it right now.
So, IMHO, telepathy is physically possible over some short distance, but it requires years of training by the participants. Someone needs to train himself to be a strong signal emitter, which could be confirmed by measuring the body's EM radiation in the 1.3-30 Hz frequency band. Then somebody else needs to learn how to hear that weak EM signal, the way blind people learn to rely on other senses.
I have had a similar feeling ("burning skin") only twice in my whole life, so, IMHO, it is very rare. However, a woman told me that it is not so rare, and that the stare of some "hungry" men feels similar. IMHO, it is a sort of "I love you", but for silent animals.
This kind of research really has been done to death, and much of the groundbreaking research has been declassified by now. For example the CIA mind control research.
I'm not convinced that this is a valid experimental procedure...
This is not a valid experimental procedure, of course. I saw a video of an experiment that was done properly, but I cannot find it.
Before that, I had spent years using techniques to better control my body and feelings, so I was able to notice subtle differences and gradations in my sensations. I also know a little body language (because I am an amateur ballroom dancer), so I can notice subtle changes in the behavior of others.
I feel warmth on my face when I do it, like the warmth after a warm-up, not like the warmth when I heat a part of my body by concentrating on it.
It is much easier to stare at somebody if I have feelings toward them (e.g. when I like or dislike them).
It works without eye contact: from behind the target, or when the target is sleeping.
Targets always know the direction from which I stared at them.
When the effect is strong, it is easy to notice: it feels like something is happening on the skin ("burning"), but without pain of any kind, i.e. the nervous system sends strong signals about something ephemeral while nothing happens in reality. When the effect is weak, it feels like warmth, so it is easy to confuse it with the self-heating caused by concentration. So, to test the effect properly, you must not tell the targets that you are testing them, otherwise they will concentrate on their bodies and feel warmth anyway.
IMHO, it is a kind of "I like you / I dislike you" sign language used by animals and then forgotten by humans.
I have had a similar feeling myself, but only twice:
- on a train, from a girl: after about an hour of staring at her (experimenting) I got exactly the same response. :-)
- from a soldier, when we talked about how to free Crimea of Russian soldiers. :-)
That is all.
I stopped doing it because it was hard to look at people without staring. Random people started to like me for no reason, remembered me and my name for years, etc. I disliked that.
I once saw a film in which I could tell which girl the operator/director was looking at while filming, just from the girls' reactions, so I am 100% sure that I am not alone. :-)
Go ahead and try that and get back to us.
Data is crappy at the moment, but I agree, it's at least interesting in the case of the EmDrive that people are measuring something.
Actually yes, if somebody follows crank science, my trust in them goes down (to the bottom of the Mariana Trench). But that's me; if you see no problem with it, just study his paper. He is also a big opponent of dark matter. But hey, I'm just a programmer, what do I know about theoretical physics.
Read this thread:
The guy who challenges him is a PhD student who happens to be usually on the right side of theories, though sometimes too hard-headed.
People are still heavily debating the experiments. It has not been empirically demonstrated to a standard sufficient to call the thrust real. That's why only a very small community is working on this. "NASA" in this context means EagleWorks is letting a couple people spend a bit of spare time and resources on it as a speculative project.
Mike McCulloch's work is interesting, but it's quite far from any mainstream acceptance or testing, as he himself would admit. It's mostly independent of the EmDrive stuff, but the enthusiast community has latched onto it as a way to escape the unpleasant reality that established physics says the EmDrive is a perpetual motion machine. I'm glad EmDrive is giving his work a boost because I think it should be tested, but there should be much better ways of testing Modified inertia by a Hubble-scale Casimir effect than the EmDrive (he's got a couple blog posts about it if you're curious).
There was anomalous scaling in the observed effects in one Chinese research team's results, but it has not been repeated, because a full reproduction pushing past their power levels would cost decent money.
It seems stupid that the (possibly) most groundbreaking advance in propulsion ever made can't get a million or two, when we collectively gamble more than that each day at casinos, lotteries, and horse races.
We could quickly eliminate the question of "are these anomalous results an artifact of bad experiment design?" with a bigger experiment that would make such flaws more evident. Yet we waste time instead of money.
Science needs to embrace failure more!
Your past used to fade with time. You used to be able to move town, escape your failure, try again.
Now, any failure is forever emblazoned in shrieking tones online, and the only escape is a new name, and starting over.
It's not just science, it's everything.
Perhaps they need to set up a crowdfunding campaign to get those people betting on their experiment, gambling on a science experiment rather than a game of chance.
What's stupid is betting on stupid.
What's stupid is making investments which are known to have an expected return of less than 1:1. Playing the lottery isn't stupid because we don't understand how it works; playing the lottery is stupid because we do understand how it works, we know what the expected return is, and we know that it's a worse investment than just stuffing cash under a mattress. This is entirely uncontroversial, and lotteries are run by profit-making entities (either private firms or, as a "tax on the ignorant", by governments) whose entire viability relies on this fact.
In contrast, things like the EmDrive are high-risk, high-return investments. Their expected return is more difficult to estimate than a lottery's, since we'd need to estimate both the probability of it working and the expected return if it did work. However, whilst an idea like the EmDrive may be controversial, the idea of spending a small proportion of research investment on ideas like the EmDrive isn't controversial. There may be arguments over how much counts as "small", which ideas should be prioritised, etc., but this just goes back to the uncertainty of estimating the expected return. It's entirely uncontroversial to say that, if it works, the EmDrive would be an incredibly lucrative technology; it's also uncontroversial to say that it's unlikely to work. The tricky part is working out which term dominates the expectation: does the big thing (the potential return) multiplied by the small thing (the probability) result in a big thing or a small thing (the expected return)?
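As a toy illustration of the expected-return argument above, here is a minimal sketch; all the payoffs and probabilities are invented for illustration and are not real lottery or research figures:

```python
# Toy sketch of expected return: lottery vs speculative research bet.
# All numbers below are invented assumptions for illustration only.

def expected_return(outcomes):
    """Sum of payoff * probability over all (payoff, probability) pairs."""
    return sum(payoff * prob for payoff, prob in outcomes)

# A lottery: odds are known exactly, and the expected return per $1
# staked is well below 1:1 by construction.
lottery = [(1_000_000, 1e-7), (0, 1 - 1e-7)]

# A speculative research bet: enormous payoff if it works, but the
# probability estimate is the genuinely uncertain part.
research = [(1e9, 1e-5), (0, 1 - 1e-5)]

print(round(expected_return(lottery), 6))   # 0.1 per $1: a losing bet
print(round(expected_return(research), 6))  # 10000.0 per $1 under these assumptions
```

Under these made-up numbers the lottery returns ten cents on the dollar while the speculative bet dominates; the whole dispute is over how defensible the small probability estimate is.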
The problem here is that you could make this same argument about almost anything, a la Pascal's Wager.
I meant "high risk" in the sense that the return has a high variance, with all of it being concentrated in a thin sliver of the probability; apologies if I've misused a technical term.
> The problem here is that you could make this same argument about almost anything, a la Pascal's Wager.
I'd call it a consequence of expected return being a widely-applicable calculation, rather than a "problem" per se. Even if we knew the expected returns, we'd still need a decision procedure to perform the allocation.
My point is that it's uncontroversial to avoid putting all funding into the most promising project (e.g. other physics research doesn't wait until we're "finished" with the LHC), so there's certainly scope for allocating a small budget to more "fringe" research like the EmDrive. I'm not in charge of research budgets, but as a simplified argument we might imagine allocating funding on an exponential scale, based on expected return and risk: the most promising projects compete for a chunk of half the funds, the "second tier" projects for a quarter, and so on. Projects with lower impact are lower tier; projects with lower probability of success are lower tier. We stop once an allocation would round below the smallest unit of funding, hence avoiding Pascal's Wager.
Also note that there are only a finite number of options to choose from, because there are only a finite number of submitted research proposals.
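The tiered scheme described above could be sketched like this; the tier assignments, budget, and minimum grant size are all invented for illustration:

```python
# Sketch of the "exponential tiers" allocation idea: half the budget
# to the top tier, a quarter to the next, and so on, stopping once a
# per-project grant would fall below the smallest unit of funding.

def allocate(budget, tiers, min_grant=1):
    """Tier i gets budget / 2**(i+1), split equally among its projects;
    stop when a per-project share drops below min_grant."""
    grants = {}
    for i, projects in enumerate(tiers):
        share = budget / 2 ** (i + 1)
        per_project = share / len(projects)
        if per_project < min_grant:
            break  # this cutoff is what avoids Pascal's Wager
        for p in projects:
            grants[p] = per_project
    return grants

# Hypothetical tiers, purely illustrative:
tiers = [["LHC"], ["gravitational waves", "dark matter"], ["EmDrive"]]
print(allocate(100, tiers))
# {'LHC': 50.0, 'gravitational waves': 12.5, 'dark matter': 12.5, 'EmDrive': 12.5}
```

The finite list of submitted proposals keeps the loop finite, and the `min_grant` cutoff keeps arbitrarily improbable projects from claiming real money.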
>established physics says
Restricting research capacity and dismissing empirical evidence because it doesn't jibe with "established science" is exactly the opposite of what science is all about.
Modern science seems to be rapidly encroaching on religious territory: observable evidence is less important than what historical figures have said.
While every chance should be taken to further explore our current assumptions (such as with the LHC), we shouldn't be neglecting low-hanging fruit that challenges our ideas (such as the EmDrive). If as little as 1% of the money being spent on the LHC went to "anomalous science", we'd likely have a conclusive answer on whether the EmDrive works. Science is a tool that disproves, and we've been hacking it into a tool for proof for far too long. It's time to go back to basics and figure out more about what we don't know.
> McCulloch’s theory makes two testable predictions.
Think about the cost of either of those experiments: vanishingly small compared to the other science being performed today. If science wants to see the EmDrive go away, and is certain that it doesn't work, a comparatively small grant is all it takes.
It has been argued that the only way science progresses is when the obstacles to science die of old age.
In what sense is it a perpetual motion machine? As far as I understand it violates the conservation of momentum, not the conservation of energy. How would a theoretical perpetual motion machine based on this effect work?
>The cone allows Unruh radiation of a certain size at the large end but only a smaller wavelength at the other end. So the inertia of photons inside the cavity must change as they bounce back and forth. And to conserve momentum, this must generate a thrust.
The new idea introduced is the quantization of inertia at small accelerations. As far as I understand it, from one end of the cone to the other there isn't a smooth change of inertia. This depends on the idea of Unruh radiation, and the reason the article gives for quantization of inertia is that as accelerations get very small, Unruh radiation wavelength becomes larger than the observable universe, forcing unruh radiation to take whole-value wavelengths (quantization). Again as far as I understand it, the inertia of photons on one end of the cone takes a different quantized value of inertia than the photons on the other end. So the thrust isn't a violation of conservation of momentum. The thrust is necessary to not violate conservation of momentum.
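As a rough order-of-magnitude check on that argument (my own back-of-the-envelope sketch, not a calculation from the paper): assuming the dominant Unruh wavelength scales roughly as 8c²/a, an approximation attributed to McCulloch, and taking the horizon scale to be roughly the diameter of the observable universe (an assumed value), the quantization only bites at extremely small accelerations:

```python
# Back-of-the-envelope sketch (assumptions, not the paper's numbers):
# if the peak Unruh wavelength is ~ 8 c^2 / a, then it exceeds the
# cosmic horizon scale Theta once the acceleration a drops below
# a_min ~ 8 c^2 / Theta.

c = 299_792_458   # speed of light, m/s
theta = 8.8e26    # rough diameter of the observable universe, m (assumption)

a_min = 8 * c ** 2 / theta
print(f"a_min ~ {a_min:.1e} m/s^2")  # prints "a_min ~ 8.2e-10 m/s^2"
```

That threshold is many orders of magnitude below everyday accelerations, which is why any such effect would only show up in extremely sensitive, low-acceleration regimes.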
A nonzero photon mass makes a mess of particle physics at high energies. The Standard Model falls apart due to loss of gauge invariance, and QED becomes really obviously non-renormalizable near limits we have already tested. So this would be a big thing.
(Note that if you bite the bullet and accept a nonzero photon mass (as McCulloch says in his paper at page 3: "Normally, of course, photons are not supposed to have inertial mass in this way, but here this is assumed") you can probably get a non-constant photon speed too, along the lines of neutrino oscillation. But you can always write down non-physical theories that conflict with frequently and precisely tested areas of physics...)
None of the above departs from the symmetries of Special Relativity, and one subgroup of those (invariance under spatial translation) is what implies the (local) conservation of (linear) momentum per Noether's (first) theorem.
So there's a really big looming question about why the Standard Model (which has the Poincare Group baked into it) works as well as it does everywhere but in an EmDrive or a single space probe, and resorting to special shapes of objects on scales much larger than that of atoms further conflicts with Poincare invariance.
Finally, Unruh radiation is a difference in particle count and particle energy measured by differently accelerated observers. The cosmological horizon, when it formed, produced an acceleration between observers then and observers in the future. Unruh radiation from that is pretty uncontroversial. However the problem is that the acceleration is pretty small, so the temperature will be much lower than that of the CMB. Also, you'd expect anisotropies based on Earth's (and its surrounding Local Group's) peculiar motions relative to the horizon. Why isn't there a dipole anisotropy similar to the one we see in the CMB, and if there is, by how much does it offset the inertial argument in McCulloch's paper?
The local symmetries in question are represented by the Poincare Group, which is the isometry group of Minkowski spacetime, which in turn is the unremovable background of Special Relativity. (The Lorentz Group is a subgroup of the Poincare Group).
This is another way of saying that in a 3+1 flat spacetime, well-designed local probes of fundamental physics will not depend on time translation (i.e., experimenting today vs experimenting tomorrow), on spatial translation (e.g., the same experimental results here vs there), on spatial orientation (i.e., rotation about the 3 spacelike axes; so you get the same result when you turn the system under test 90 degrees to the left), or under Lorentz boosts, which are basically instantaneous changes in constant uniform motion along any of the axes. Additionally, small-scale (small compared to several times the size and mass-energy of the whole solar system) natural phenomena are essentially always "well-designed local probes of fundamental physics".
Conservation of linear momentum arises from invariance under spacelike translation; a violation of that conservation makes it very difficult to maintain spatial translation symmetry, and in particular flies in the face of the many direct tests of physical systems at different places on our planet and in our solar system, for example. So that's a big deal that needs explaining, and in particular the explanation should preserve the known and reproducible invariance under those translations as well as allowing the EmDrive's alleged violation.
The perpetual motion machine argument is slightly different because it is rooted in a claim about how the EmDrive behaves when in non-constant motion; in the current version of the theory paper "V 9.4" at equations 14-16, there is an implied violation of the Einstein Equivalence Principle. The equation is a bit odd, and you easily can read it to say that there is a power<>thrust relationship that varies with acceleration, and raise the Einstein elevator objection: if you put the EmDrive into an upwards-accelerating box, or leave it on the ground, do you still take this power<>thrust relationship seriously? If so, you get "free power" (although that's a bit subtle). If not, then you have to explain a violation of the Equivalence Principle, which is also something that has also been very well tested and has so far applied without fail.
You could think of it with a somewhat concrete example: to hover at some fixed height above the Earth's surface, the EmDrive would have to produce thrust equal to its weight, just as your hands would if you held it at that height; since you are near the surface, the required thrust per unit mass is about "g" expressed in N/kg (and indeed Eq. 16 tells you how much electrical power the EmDrive will require). So far so good. But Eq. 16 can also be read to say that when you place the EmDrive on the ground, it will produce electrical power proportional to "g".
I think this is pretty clearly just a mistake rather than a serious claim. Unfortunately eq 16 features prominently in the FAQ and all the "marketing" material about the EmDrive. (Even more unfortunately it's hard to see how to fix the equation without making the drive obviously inoperable).
Given an exotic enough explanation (negative energies/faster than light particles/spacetime warping/..) it feels like you can cheat your way around almost any invariant? Occam is left weeping in a corner of course.
The problem with reaching for exotic particles or the like is that we don't see them in searches which involve tens of orders of magnitude more power, and we certainly don't see them in, for example, electric toasters or radar sets. So what would need explaining is what's peculiar about the EmDrive with respect to (extremely) unusual particle interactions, and there is [a] nothing obvious and [b] nothing at all in the "theory" paper.
Likewise, reaching for non-local physics answers raises similar questions (why is EmDrive doing non-local whatever but my kettle isn't; or alternatively, why is EmDrive coupling with whatever much much much more strongly than my conventional oven is).
I have another pair of things that you can add to the list: [a] the EmDrive somehow violates causality in a way that other similar arrangements of mass-energy-momentum do not, and [b] the EmDrive somehow escapes logic in a way that other similar systems under study do not. These are neither more far-fetched nor more unpalatable than abandoning local physics (causality and logic preserved, but hidden non-local variables proliferate) or abandoning the Standard Model as an accurate low-energy theory of matter (causality, logic, and locality preserved, but now what happens in the molecules and atoms of cars, computer chips, and light bulbs? We can no longer be quite so sure!).
There is A LOT of known physics and almost exactly zero examples of violations of the known invariants at low particle energies. You are re-testing relevant parts of all that in reading this comment.
It's paywalled at that link, though, so I can't see the details, which tend to be everything: the last time a vacuum chamber was involved, it wasn't actually evacuated.
NASA, at least, has a microwave source with the right frequency. But they report "researchers were now working on a new integrated analytical tool to help separate EmDrive thrust pulse waveform contributions from the thermal expansion interference". That indicates this thing is still way too close to the noise threshold.
Back when cold fusion was taken seriously, I went to a Stanford talk where a physicist described their attempts to replicate the experiment. At first, he said, they had the apparatus surrounded with radiation detectors and alarms, in case it produced a dangerous burst of neutrons. After a while, they realized that wasn't going to happen. They discovered that the effect being measured was about twice background radiation. Then they discovered that people moving around the apparatus affected neutron readings by more than the measured amount. (Humans are mostly water, and thus reflect neutrons.) Finally they moved the experiment to a "neutron cube" built from lead bricks, where the background radiation was very low. The measured neutron readings went way down. That's what it's like when a phenomenon is near the noise threshold.
Everybody acts to their own interests.
It is possible to "disprove" a lot of known science with that kind of experiment.
The device produced positive thrusts in the positive direction and negative thrusts in the negative direction of about 20 micronewtons in a hard vacuum, consistent with the low Q factor.
Besides being tested horizontally in both directions on the torsion pendulum, the cavity was also set upwards as a "null" configuration. However, this vertical test intended to be the experimental control showed an anomalous thrust of hundreds of micronewtons that could be caused by a magnetic interaction with the power feeding lines going to and from liquid metal contacts in the setup.
Am I misreading this somehow?
That's not really noise level, that's just unknown systematic errors.
And then, on top of that, when the observed effect comports with a theory that also predicts another, well-established observed effect?
It's been said that the most important phrase you will ever hear a scientist utter isn't, "I've found it!", but rather, "Well, that's strange."
This is very much in the, "Well, that's strange" class of things. Either way, we're going to learn something about how the universe works, or about our ability to measure it, or both. This should be celebrated, not dismissed as mere "measurement error."
Yes, it requires more confirmation. Much more. I'm not a scientist, but I think that, taken together, it's interesting enough — "Well, that's strange" enough — to warrant further investigation, instead of being all, "Meh. Measurement error."
Am I mistaken in that belief?
Unruh radiation is weird: under some interpretations it can be used to reduce the inertial mass of objects. The thing I take from this is the hope that the EmDrive could prove it, rather than the other way around, because then we could say we've discovered the "Mass Effect" ;)
Then why have 6 different teams verified it? I'm not saying the EmDrive is the real deal, but either the error is hard enough to catch that 6 teams missed it, or there's something else going on. Saying all these teams are wrong is, to me, far from "obvious".
With the current laws of physics, the theoretical maximum ForcePerPowerInput is 1/c = 0.0033 mN/kW. In that table, the ForcePerPowerInput varies from 300000x to 3x that value. That's a lot of variation, not an exact value that coincides with a theoretical prediction. It's a pity that the list is not ordered by date.
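For reference, the 1/c figure is just the photon-rocket limit (standard physics, nothing specific to the EmDrive): a perfectly collimated light beam of power P produces thrust F = P/c, so the maximum force per unit input power is 1/c:

```python
# Photon-rocket limit: a collimated beam of light carries momentum
# p = E/c, so thrust F = P/c and the force-per-power bound is 1/c.

c = 299_792_458  # speed of light, m/s

force_per_watt = 1 / c                  # N/W for a photon rocket
force_mn_per_kw = force_per_watt * 1e6  # convert N/W -> mN/kW

print(f"{force_mn_per_kw:.4f} mN/kW")  # prints "0.0033 mN/kW"
```

Anything measuring far above this bound, for a device that emits nothing, is claiming new physics rather than an efficient photon rocket.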
If you order the experiments by date and all of them have roughly the same result, they are probably measuring a well-known effect where all the variables are well understood, like measuring g with a pendulum.
If you order the experiments by date and the value increases a lot, it's perhaps a new phenomenon that is still not well understood, and they are tweaking the materials to get more efficiency. For example, consider measuring the critical temperature of a high-temperature superconductor. If you pick a fixed, simple superconductor, you expect to get approximately the same result in any laboratory, though small changes in the fabrication process can increase or decrease the temperature. But any time someone discovers a new superconductor material or method of production, you get a new record, so the world record keeps increasing and the other laboratories try to reproduce and improve on it.
If you order the experiments by date and the value decreases a lot, it's possibly a sign that they are fixing experimental details and reducing the experimental errors, and they get a smaller result because the correct value is 0.
Everyone would be asking to see the code and the output for themselves before they got excited.
Wikipedia says that the experimental tests disagree on even the sign of the measured force:
"An article published by Shawyer in Acta Astronautica summarises the existing tests on the EmDrive. Of seven tests, four produced a measured force in the intended direction, and three produced thrust in the opposite direction. Furthermore, in one of the tests, thrust could be produced in either direction by varying the spring constants in the measuring apparatus. Shawyer argues that the thrust measured in the opposite direction is the reaction force from the drive, and therefore it is consistent with Newtonian mechanics."
Would it really be so surprising if similar experiments, all working from the same basic design, using similar components under similar conditions, suffered from the same design flaw?
So your counter argument should be pretty easy to test, and I'd be surprised if this isn't accounted for in these experiments.
In the example of the FTL neutrinos there was only one non-repeatable instance; here we have at least three.
Most of the time skepticism is good, but sometimes it can blind you. At the very least this effect needs to be investigated further, because the implications, if it turns out to be correct, for both practical applications and physical theory could be huge.
It's still well within the realm of measurement error.
Speak plainly. If you're going to assert the article is false, assert that it is false.
The tl;dr for this is that Woodward proposed (based, AFAICT, on actual science) that rapid internal energy changes cause transient mass fluctuations. His device used a capacitor on a piezoelectric effector, so that the effector would only push while the capacitor was charging or discharging, and only pull while the capacitor was _not_ charging or discharging, producing an asymmetric effect and, therefore, thrust.
There have been a number of attempts to verify this experimentally, all of which have been inconclusive; it turns out that measuring very small forces when your test equipment is vibrating is very, very hard (see the Dean Drive).
My feeble understanding is that the underlying theory is at least plausible; it doesn't seem to violate anything we know about the universe. The last report I've seen of work on this is this (rather good) BoingBoing article from 2014:
It could be measurement error combined with a little wishful thinking, or it could be that there's some undiscovered physics there and we're tip-toeing around the edge of the parameter space where it manifests. If that's the case, all these "X effect" systems could be working according to the same principle.
There are other cases in science (e.g. early transistor efforts and superconductors) where tinkerers and engineers hit on effects that were not understood and were ignored for a long time until we had some kind of theory that explained them and much more importantly told us how we might optimize for the effect.
The McCullough prediction that a dielectric should increase the effect seems easy to test.
There are lots of unbalanced-wheel machines, and none of them work.
This is from the perspective of a former physicist who is grumpy about the endless torrent of "Einstein was wrong!"-type articles. I personally feel like it's risky to allow people to pin their hopes on something that's pretty obviously bunk, but I imagine it does have some benefits. I just don't think it's worth it.
That doesn't state that they think it works or doesn't... but they are investing time to clarify which is which. And that to me qualifies as it being an "interesting mystery".
Such claims don't often get replicated by multiple, independent labs, as is the case here.
I'd love for this phenomenon to be real, but the Technology Review article did not present a realistic picture of the verification attempted to date.
From the IO9 article:
> The experimental setup is so flawed that it’s continuing to produce measurable “thrust” while in null mode when it should do nothing.
From the Wikipedia page on RF resonant cavity thrusters (and corroborated by the citations):
> the 'null test article', was designed without the internal slotting that the Cannae Drive's creator theorised was necessary to produce thrust
> The null test device was not intended to be the experimental control.
The article's author seems to fundamentally misunderstand the purpose of the null test setup. Setting everything else aside, if the null articles did produce thrust, this would disprove the Cannae theory (which requires the slotted configuration), but would say nothing about the efficacy of RF thrusters in general.
Their quotes from various physicists about why the drive is probably nonsense are a lot more compelling.
(To be precise, the comment about the null thruster was made by the author in a comment on this article, and by a previous article written by the author, which this one references. It is not in the article itself.)
It sounds like you think the author is asserting that the null test article was intended as an experimental control and that its production of thrust is evidence for the null hypothesis.
I read him/her as asserting that the null test article is an experimental apparatus calibration tool, and that the reading of thrust suggests the apparatus is improperly calibrated, so that no results at all can be inferred from the experiment.
There's a difference between "I don't believe X" (which allows for ambivalence) and "I believe not X" (which does not). There's also a difference between "I believe not X" and "I have no doubt of not X".
I was pointing out that there is no dispute or even doubt about the null article's inability to produce thrust (and this would have been a better phrasing). So the question is not "what does the null article's thrust imply about various hypotheses?" It's "what does the apparatus' measurement of thrust from the null article imply about the apparatus?" GP seems to have missed this point.
> "What does the apparatus' measurement of thrust from the null article imply about the apparatus?"
That makes sense, and I think it's a good question. Thanks for clarifying!
Tech Review is a PR rag, not a scientific publication.
In fact, he was right and diamond synthesis by CVD is routine today. We probably lost about a decade of progress in diamond CVD because of Derjaguin's having been tarred with polywater, as it were.
I like popular articles about dark energy. About the only thing you learn from them is "Einstein called the cosmological constant his greatest blunder!".
It was there originally: he added it because general relativity could not give a static universe without a nonzero CC, and at the time Hubble, Lemaitre, and Friedmann had not yet demonstrated that the universe is non-static. Once there was overwhelming evidence of what we now call the Hubble flow, he removed it. Much later, when observations showed the expansion is accelerating, cosmologists realized that a small positive cosmological constant produces exactly that, and it was put back in.
What you don't usually learn in those articles about dark energy is that in the Friedmann-Lemaître-Robertson-Walker model of the standard cosmology, you have an assortment of matter fields which are characterized by density and pressure.
Matter (in the most general sense of non-gravitational field content) has positive pressure, and some matter can clump (leading to non-uniform densities, and where density is higher, so is pressure; super-dense massive objects have enormous positive internal pressure).
Dark energy is in its simplest form a field with slightly negative pressure, and with constant density (i.e., it does not clump and it does not dilute away with the expansion like the matter fields do). This constant density is the "cosmological constant". Its absolute value is very small compared to the pressure even in slight overdensities of ordinary matter (like in sparse gas and dust clouds), so it's drowned out entirely by the positive pressures in structures like galaxies or stars.
Pressure and density are terms which are in the (Robertson-Walker) metric, and the metric describes the 4-lengths of spacetime intervals. Positive pressure contracts these lengths; negative pressure increases them.
So when fields with nonzero pressure are treated as the principal generators of the metric (i.e., matter and dark energy tell spacetime how to curve), you can call the result the metric expansion (or contraction) of space.
Microwave cavities have been a workhorse technology in many fields for many decades, and everyone has found that existing physics (classical electrodynamics, superconductivity, and a few other things) is entirely sufficient to explain how they work.
There's a very simple explanation for why a small group of people would report an unphysical, novel effect in a well-studied physical system: sloppy experimentation.
Not that I'm disagreeing, but you could have said "Newtonian mechanics has been the workhorse method for centuries and everyone has found that it works" right up until Einstein came along and upset the apple cart.
Not saying there is anything to the EmDrive (personally I'm on the side of experimental error) but the correct way to do science imo when you have something you can't explain is to keep at it until you can.
You couldn't have said that. Einstein's contributions were solving real problems with Newtonian mechanics, where it was clearly inadequate to explain how things actually worked. The photoelectric effect was known for years before Einstein explained it with the first glimmer of quantum mechanics. The problem of a fixed reference frame for the motion of light was known for a long time before Einstein came up with relativity.
The situations aren't really comparable. Newtonian mechanics had major known flaws that people were trying to reconcile. They weren't tiny effects hiding near the noise.
I totally agree that investigations should continue until an explanation is found, it's just that people seem far too eager to assume that it must be something new, when with what's known so far it's overwhelmingly likely to be experimental error.
That's specious. I am talking about a well-studied physical system, not a theoretical framework.
The correct analogy in this case is that someone picked a system that is well-described by Newtonian mechanics—e.g., a pendulum—and then built a small, crappy pendulum, made a crappy measurement on said crappy pendulum, and then claimed the existence of new physics despite the fact that no new physics is required to explain the behavior of much bigger and better pendula that other people already built.
The big shock was of course the way he solved it, not that there was a problem to be solved.
Physicists are undertaking a wide range of experimental programs to look for more satisfying explanations of what underlies ΛCDM cosmology (which I guess is the "conjouring" [sic] that you are referring to). Such experiments could also show that ΛCDM cosmology is wrong or needs to be modified. But right now, ΛCDM cosmology does a pretty good job of explaining the data we have so far.
I'm not sure what else you would have physicists do—should they just not talk about the fact that there's a relatively parsimonious framework that explains the large-scale behavior of the universe?
The results are random. Sometimes nonzero thrust is observed in a direction opposite to what was expected. Know what that sounds like? A null measurement dominated by statistical and systematic uncertainties.
> including NASA
I'll repeat what I said elsewhere: NASA is so big that that doesn't mean anything. Not everyone affiliated with NASA is a top-notch researcher. The word "NASA" is not automatic proof of good research.
>Some have questioned why no companies such as Boeing, Lockheed Martin, or SpaceX have attempted to investigate the device, but regardless of how likely these companies find the results so far, the largest reason is almost surely that the devices are both patented by their inventors.
If it worked, being first to market with an exclusive agreement with the patent holder would be lucrative.
I'm of the opinion that science today is too conservative and gun-shy. Science needs to be more willing to fail, and scientists who pursue cold leads should not have their careers destroyed.
An interesting and relevant example was the Dean Drive:
That's a misleading statement. I'm passingly familiar with a few of the experiments they're referring to, and none of them both produced significant results and came from a group that seemed above suspicion. I'm not aware of any peer-reviewed paper on this stuff, and I don't personally know any non-laypeople who believe anything actually remarkable is happening here.
Eric W. Davis, a physicist at the Institute for Advanced Studies at Austin, noted "The experiment is quite detailed but no theoretical account for momentum violation is given by Tajmar, which will cause peer reviews and technical journal editors to reject his paper should it be submitted to any of the peer-review physics and aerospace journals."
Basically, merely having a lot of replicated experiments isn't a high enough standard--one has to have a theory of why it works. This somewhat makes sense, but kinda fails miserably for things for which we currently have no mechanism of explanation.
Imagine if empiricists had been faced with the nonsense of peer review a couple hundred years ago, before they had any of the knowledge of chemistry or physics needed to really explain electricity. Hell, imagine how Alessandro Volta would have had trouble publishing his work today, when all he had was the empirical evidence of a voltaic pile but no knowledge of the electrochemistry that made it work.
TL;DR - a small, experimental division within NASA called Eagleworks tested the device. They are a very small group with very limited funding tasked with exploring unconventional theories around advanced propulsion. Their results were not published in a peer-reviewed journal, and there is considerable disagreement as to the validity of their experiment. They are continuing to refine their experiments, and the last update provided is that they intend to publish a peer-reviewed paper describing an experiment that successfully exceeded 100 µN of thrust, which was the target needed for JPL and others to attempt to replicate their results.
I gotta quote one line that is just cool, though:
> 4. A test at 50 W of power during which an interferometer (a modified Michelson device) was used to measure the stretching and compressing of spacetime within the device, which produced initial results that were consistent with an Alcubierre drive fluctuation. Test #4 was performed, essentially, on a whim by the research team as they were bouncing ideas off each other, and was entirely unexpected. They are extremely hesitant to draw any conclusions based on test #4, although they certainly found it interesting.
(Off topic, but along the lines of closer examination, see https://news.ycombinator.com/item?id=9020065 for how some careful (and extensive!) experimentation helped illuminate sodium's reaction with water. Note just how much work was needed to tease out the details!)
This could be complete bunkum but it's fun watching anyway as an interested bystander.
NASA is a big institution. It's a very different thing to say "NASA said XY" or "someone at NASA claimed to have found XY".
NASA is so big that that statement doesn't really mean anything.
"In this scheme there is a minimum allowed acceleration which depends on a Hubble scale Θ, so, if Θ has increased in cosmic time, there should be a positive correlation between the anomalous centripetal acceleration seen in equivalent galaxies, and their distance from us, since the more distant ones are seen further back in time when, if the universe has indeed been expanding, Θ was smaller. The mass to light ratio (M/L) does seem to increase as we look further away. The M/L ratio of the Sun is 1 by definition, for nearby stars it is 2, for galaxies’ it is 50, for galaxy pairs it is 100 and for clusters it is 300. As an aside: equation (11) could be used to model inflation, since when Θ was small in the early universe the minimum acceleration is predicted to be larger." (http://arxiv.org/pdf/astro-ph/0612599v1.pdf)
If an effect was stronger in the early universe, you'd expect to see a big correlation between the effect size in a galaxy, and that galaxy's redshift z. It wouldn't make any sense to say that "galaxies" have a ratio of 50, since there are galaxies at every redshift; many are nearby and have redshifts of almost zero, while the Ultra Deep Field galaxies have very large redshifts of up to ~10. If the number is really the same for "galaxies" in general, that means there's no distance dependence, but McCulloch doesn't seem to realize this. He seems to imply that nearby stars have a higher mass/luminosity ratio because of their distance compared to the Sun (?!), but the time-delay effect for anything in the Milky Way is negligible (< 0.0005% of the universe's age). In reality, nearby areas of space will have higher ratios than the Sun just because they contain many objects which, unlike the Sun, don't emit much light (red/brown/white dwarfs, gas and dust, etc.). Likewise, he seems to imply that "galaxy clusters" are farther away than "galaxies", but most galaxies are part of clusters, and we can observe both galaxies and galaxy clusters at both small and large redshifts.
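To put a number on the Milky Way point: the light-travel time across the galaxy really is a negligible fraction of the universe's age. A quick sketch (round order-of-magnitude figures, not values from the paper):

```python
# Rough check: lookback time to objects across the Milky Way,
# as a fraction of the age of the universe.
galaxy_scale_ly = 50_000        # light-years, roughly the far side of the galaxy
age_of_universe_yr = 13.8e9     # years

lookback_fraction = galaxy_scale_ly / age_of_universe_yr
print(f"{lookback_fraction:.6%}")  # a few millionths of a percent
```

So any time-delay effect within the galaxy is indeed well under the ~0.0005% quoted above.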
It would be interesting to see how the theory behind the Unruh radiation works with "The quantum vacuum as the origin of the speed of light" (http://arxiv.org/abs/1302.6165#)
Or also MOND (https://en.wikipedia.org/wiki/Modified_Newtonian_dynamics) Which also has predictions based on very low accelerations.
It seems like the theories could be related.
(says an ex-nuclear physicist who's now doing computers).
I'm a complete layman here, so I don't understand how the link you shared says that the speed of light is changing; it seems more like a change in the way we define the speed of light. Is that right?
If the actual speed of light is to change, how is that possible? We've experimentally verified it up one side and down the other, so this seems like a huuuuuge discovery.
So yes, this seems like a huge discovery. Pretty much all of our understanding of relativity is based on the fact that c is, well, immutable. Changing the way we understand c means that we'll need to completely change the way we grok the last century of theoretical physics.
But there is another layer below that I am less sure about; it may be (slightly) wrong. In quantum electrodynamics a photon takes, figuratively speaking, all possible paths from source to detector, including paths that require going slower or faster than the speed of light. All those possible paths interfere with each other, and the net result is that the photon appears to move at the speed of light with very high probability. But really, take that with a large grain of salt.
That's a story we tell kids who ask too many awkward questions. If it were true we would see sharp changes in the observed speed of light at wavelengths depending on the absorption spectrum of the material in question.
My understanding is that the real answer involves photons gaining inertia via coupling with particles in the material they are passing through (cf. Higgs) ... but I'm probably mangling that explanation horribly.
Uh, we do. The imaginary portion of the index of refraction near a resonance generally looks roughly like a gaussian, and the real portion does have jumps that look a bit like tanh.
So it's the speed of light in quantum foam?
The one YouTube guy discovered the beaded-chain lifting effect, and then it had to be studied to find out what was going on. Obviously that was an easily reproduced experiment.
So with this thing, we must find conclusively the unmeasured heat or ions or whatever and show a repeatable method for such mistakes. That is my opinion about science, of course I probably lost most scientists with my first sentence.
We need to find that kind of explanation for this supposed propulsion.
For this drive, what we have is an experiment that begs a theory. There's something interesting going on, and so far it eludes easy explanations. Quite possibly we'll get an innovation out of it in experimental setup, or best case real, easily verifiable thrust is detected. Chances are there won't be any new physics, but rather a very clever engineering exploit of what was already known (but not properly applied).
Enjoy the failures in science; it means we're actually trimming the dead ends carefully instead of assuming all innovation is low hanging fruit. There's a lot of bunk out there, but there's also tons of neat edge cases to map out!
A friend's husband works in one of those labs. I forget the mechanism at play (I think it was cavitation), but it was room-temperature fusion generating neutrons. It'd never be self-sustaining for power, but still incredibly useful.
You are probably referring to this: https://en.wikipedia.org/wiki/Fusor
Otherwise, by analogy, on Stack Overflow should we take each new programmer's statements at face value, without seeing their actual code or error messages, unless an experienced programmer has the patience to refute it individually?
It wouldn't fly here if it was an astonishing claim about gcc backed with no specific code or output.
People don't just say 'well multiple teams have written code and gotten output that agrees with what I'm saying.' We need the actual information, not vague reports that a friend of a friend thinks there were test cases.
Likewise, the EmDrive is also being tested. So far it's pretty inconclusive and it looks likely to have a mundane explanation. But testing continues, so what exactly are you complaining about?
Anyway... I started to hear some of that tone, and I guess I assume that there really are some physicists here on HN, and so I posted what I did.
By extension, I think this is the most interesting article I've seen to date on the EmDrive: it seems to have a basis in a fairly non-controversial result of GR, which in turn is something which nicely explains an otherwise bizarre physical phenomenon. And, to top it off, there are a number of falsifiable predictions which are within our ability to test. I'm interested in whether any of my assumptions are wrong.
Most theorists think Hawking–Unruh radiation has to exist, I guess because the calculation is relatively straightforward. You just need special relativity and some basic field theory.
Hawking–Unruh radiation has never been observed in a gravitational system, and doing so would be very difficult because it requires a truly enormous acceleration in order to produce a horizon with a temperature that is an appreciable fraction of 1 K.
There are certain regimes in fluid mechanics where the mathematics governing the fluid looks like the mathematics governing relativity. Some people have produced phenomena in those systems that look like Hawking–Unruh radiation. Some people further claim that this demonstrates the existence of Hawking–Unruh radiation for gravitational systems, but I find that reasoning to be specious.
However, the theory behind Unruh radiation says one thing that creates an obvious problem for anyone trying to use it to explain something like the EmDrive or the flyby anomaly: Unruh radiation is felt by objects that are undergoing proper acceleration. That is, they have to already be experiencing thrust. And according to the theory, the thrust they are experiencing has to be very, very small, so that the wavelength of the Unruh radiation is of the same order as the size of the observable universe.
In the case of the flyby anomaly, the spacecraft passing by the Earth were in free-fall orbits--i.e., zero thrust. That means zero Unruh radiation.
In the case of the EmDrive, technically the apparatus was feeling "thrust", in the sense that it was sitting on the surface of the Earth and therefore feeling weight. (Weight counts as "thrust" in this connection.) But the weight of the EmDrive apparatus is many orders of magnitude larger than the small accelerations that would be required for the Unruh radiation explanation to work--the wavelength of Unruh radiation associated with a 1 g acceleration is about one light-year, much, much smaller than the radius of the observable universe.
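The scales involved are easy to check numerically. Using the Unruh temperature T = ħa/(2πck_B) and the order-of-magnitude Unruh wavelength λ ~ c²/a (conventions differ by a small numerical factor), a sketch:

```python
import math

hbar = 1.0545718e-34   # reduced Planck constant, J*s
c = 2.998e8            # speed of light, m/s
k_B = 1.380649e-23     # Boltzmann constant, J/K
g = 9.81               # 1 g of proper acceleration, m/s^2
light_year = 9.461e15  # m

# Unruh temperature at 1 g: hopelessly cold, ~4e-20 K
T_1g = hbar * g / (2 * math.pi * c * k_B)

# Acceleration needed for a 1 K horizon temperature: ~2.5e20 m/s^2
a_1K = 2 * math.pi * c * k_B / hbar

# Order-of-magnitude Unruh wavelength at 1 g: roughly one light-year,
# vastly smaller than the ~1.4e10 ly radius of the observable universe
wavelength_ly = (c**2 / g) / light_year

print(T_1g, a_1K, wavelength_ly)
```

This is consistent with both claims above: detecting gravitational Unruh radiation requires enormous accelerations, and at 1 g the associated wavelength is nowhere near cosmological scales.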
So, bottom line, whether or not the EmDrive results themselves are valid (I am skeptical, but the whole thing is still being hashed out so it's too early to know for sure), it doesn't look to me like Unruh radiation can account for results of this sort.
(Btw, as gaur pointed out, Unruh radiation actually is a result of quantum field theory in flat spacetime, i.e., not GR, as the article claimed.)
A working reactionless drive isn't just extraordinary. It's utterly mind-boggling. The least interesting thing is that it's a free-energy device.
It requires breaking spatial symmetry. If it works, it's not some edge-case theoretical law that's being broken. It's the geometry of space.
The infinite energy device claim is easy enough to test. Since a working infinite energy device would be perverse, I'd predict that an EmDrive rigged up so as to produce infinite energy would fail to do so. Exactly how it fails to do so might tell us what's going on. Do the energy requirements rise with momentum (as they should in a sane universe), or do you pass a point at which the effect ceases, or does something truly wacky occur like space-time distortion in such a way as to cancel the effect?
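The over-unity argument can be made concrete. For a drive producing constant thrust F from constant input power P, the kinetic-energy output rate is F·v, which exceeds the input once v > P/F. Taking a purely illustrative thrust-to-power figure of 1 mN per kW (not a measured EmDrive value):

```python
# Breakeven speed at which a constant thrust-per-watt drive becomes over-unity.
# The 1 mN/kW ratio is illustrative, not a claimed measurement.
power_w = 1000.0     # input power, W
thrust_n = 1e-3      # thrust at that power, N

v_breakeven = power_w / thrust_n   # m/s; beyond this, F*v > P
print(v_breakeven)                 # 1e6 m/s, about 0.3% of c
```

In a sane universe, something has to give before the device coasts past that speed and starts returning more kinetic energy than you put in.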
My reading of the McCullough hypothesis is that it's pushing against all matter in the universe at once, or some such thing, but that could be way off. I'm not a physicist.
Maybe Hawking and Milner should be considering this for Starshot?
2. A lot of the "launch a 1U Cubesat for $100k" figures are for the launch itself. That ignores other stuff like engineer wages, legal, etc., and is mostly hyperbole. Launching two for $200k is much more common, as the second one takes a lot less time to put together once the initial R&D is done. And these "$100k" Cubesats are, capability-wise, fairly useless--think Sputnik-type satellites. Want a working payload? Prepare to do more R&D. Oh, and Cubesats have about a 50% rate of actually operating in orbit. First one doesn't work? Now you need to find more money to launch another if you weren't lucky.
3. The number of people willing to throw millions at something that currently isn't explained by science, and that is a lot harder to debug when you don't have someone sitting next to it in orbit, when it can be done on the ground for an order of magnitude less money, is very small. Most people interested in using this are waiting for someone else to pay for the testing first. Hell, most satellite engineers I know still believe it to be either a hoax or a measurement error. So people are riding this out until there is more hard evidence.
Disclaimer: I'm a Cubesat engineer
EDIT: and once I convince Milner to finance this boondoggle can I sign you up to be my lead satellite engineer?
Key word: if
A Falcon 9 can lift 10,000 kg to LEO (or more). Let's say we could get a simple EmDrive, a basic satellite bus, and a solar panel in at under 100 kg. (We can do better, but let's start there.) That's 1/100 of the payload, so (in theory) we could maybe pay 1/100 of the $60M cost of a Falcon 9 launch, or $600,000.
Half a million bucks plus some change, let's say.
Edit: Less if we can get on board a previously-enjoyed rocket!
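The back-of-the-envelope above, as a sketch (all figures are the round numbers from this comment, not a real launch quote):

```python
# Naive rideshare pricing: pay for your mass fraction of the launch.
falcon9_payload_kg = 10_000   # conservative LEO capacity
falcon9_cost_usd = 60e6       # rough full-launch price
our_mass_kg = 100             # EmDrive + bus + solar panel, optimistic

cost_per_kg = falcon9_cost_usd / falcon9_payload_kg   # $6,000/kg
rideshare_cost = cost_per_kg * our_mass_kg            # $600,000
print(rideshare_cost)
```

Real rideshare pricing doesn't scale this cleanly (integration, dispenser, and insurance costs dominate at small masses), which is part of why the $2-3M figure below is more plausible.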
I'd be astonished if the cost of this was less than $2-3M.
So not much, but the larger problem is we don't know how this drive works. Miniaturizing it could impact performance, as could other electronics in the satellite... given the unknowns it makes sense to let NASA do some more terrestrial due-diligence before going to space.
> ... (McCulloch) proposes a constant term that modifies the acceleration corresponding to the inertial mass. He says torsion balance experiments can't detect it because torsion balance experiments measure differences in acceleration. But he's wrong because since it's a constant term he "predicts", it should manifest in the Eotvos parameter. Torsion balance experiments have gone well beyond the limit to detect this. But it's irrelevant because he completely misunderstands all the theory he bases this on.
But the minimal measurement results I've seen are not compatible with the radiation pressure multiplied by some large factor for the Q of the cavity, which seems to be the claim from some. That really would violate our understanding of conservation of momentum, rather than violating our assumptions about where the momentum goes in this experiment. And that seems to be ruled out experimentally so far.
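For scale, the momentum flux of the microwaves themselves sets a hard baseline: a perfect photon rocket produces F = P/c, and multiplying that by the cavity Q, as some proponents suggest, is exactly the kind of claim that would genuinely violate momentum conservation. A sketch with illustrative numbers (the power and Q here are assumptions, not figures from any specific experiment):

```python
c = 2.998e8      # speed of light, m/s
power_w = 100.0  # illustrative microwave input power, W

f_photon = power_w / c            # ideal photon-rocket thrust, ~0.33 microN
q_factor = 50_000                 # illustrative cavity quality factor
f_claimed = f_photon * q_factor   # the "radiation pressure x Q" claim, ~17 mN

print(f_photon, f_claimed)
```

Anything far above the P/c line without expelling reaction mass needs the momentum to come from somewhere, which is precisely what the various proposed theories are trying to supply.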
So the EmDrive glitches the universe size? This is hilarious.
Could someone explain the current thinking around how energy quanta relate to the size of the universe, or rather why wavelengths larger than the universe are impossible?
If the latter were true, I'd expect that an energy quantum corresponding to 0.99 * universe would be equally impossible as 1.01 * universe, and that only integer multiples of the corresponding frequency would be allowed (i.e. a harmonic series across the universe).
This seems unlikely, because - unless I've missed something obvious - you'd have to be able to set up a wave with a phase velocity > c.
McCulloch’s theory could help to change that, although it is hardly a mainstream idea. It makes two challenging assumptions. The first is that photons have inertial mass.
When I was taking college physics, there was a question on the exam about radiation pressure. I had missed that day, so I had no idea how to solve it: "A 5 mW laser is reflected off a mirror (perpendicular incidence); what is the force exerted on the mirror?" Looking it up later, the book had a page on this with a derivation from electromagnetic theory. In the exam, however, I decided to convert 1 second of laser energy to mass, bounce it off the mirror at speed c, and compute the force from the change in momentum (over a change in time of 1 s). I got the right answer, of course.
The logic is simple. If we can convert back and forth between matter and energy, any experimental setup must obey conservation of momentum and its CG must not move. So a laser inside a closed spaceship would actually be transferring mass (as energy) from one end to the other. The net effect must be the same as if that mass were moved any other way.
I derived a general expression for radiation pressure after the exam and it's identical to the EM one from the book. Photons behave - and must behave - as if they have mass with a velocity of c. By the same reasoning, gravity must bend light rays, though I have not compared this prediction to that of relativity.
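The exam trick works out identically to the electromagnetic result. Treating one second of laser output as a lump of mass-energy m = E/c² reflecting elastically at c gives Δp = 2mc = 2E/c per second, i.e. F = 2P/c:

```python
c = 2.998e8      # speed of light, m/s
power_w = 5e-3   # the 5 mW laser from the exam problem

energy_per_s = power_w * 1.0       # energy delivered in 1 s, J
mass_equiv = energy_per_s / c**2   # m = E/c^2, kg
force = 2 * mass_equiv * c         # elastic reflection: dp = 2mc per second

print(force)  # identical to 2*P/c, roughly 3.3e-11 N
```

A tiny force, which is the standard radiation-pressure answer for a perfectly reflecting mirror.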
If that is true, why? I can think of four possible reasons:
1) Is it because the simulation was designed in such a way that it leaves its inhabitants clues that reveal the true nature of the universe? This lets them figure out the truth once their society and knowledge become sufficiently advanced.
2) That it's impossible to simulate a real universe? This would imply any sufficiently advanced society will be able to detect that they do in fact live within a simulation.
3) That the creators made a mistake and it's not a fundamental limitation. It's simply an error made by whatever civilisation created this simulation, they messed something up.
4) That it's possible to create a perfect simulation, but for some reason they decided to make one that uses fewer resources, which brings errors or the need for hacks to get it working. Now we might be exploring those hacks.
It's likely the simulation would be ended if we defeated its purpose by hacking it. End of the world coming?
As everything does :) I think it is quantized at all levels; it just becomes noticeable at low levels, as usual.
Edit: These so-called "laws" are laws in our minds, and things like the EmDrive show us that our minds can expand, forming new "laws".
The device is under _partial_ vacuum. Under total vacuum, no rotation will occur; if the air pressure is too high, drag forces outpace the tiny thrust.