
Pioneer Anomaly Solved By 1970s Computer Graphics Technique - iuguy
http://www.technologyreview.com/blog/arxiv/26589/
======
PaulHoule
This is a classic example of how the physics community has been failing for
the last 30 years.

First of all, there's an extreme focus on papers that have come out in the
last 1.5 years, so that a lot of very interesting older work is invisible.

Secondly, physicists don't look outside the discipline, despite the fact that
we often use inferior techniques. Back in the 1990's, Mark Newman and I were
both working at Cornell and both of us were aware that the techniques
physicists were using to evaluate power law distributions were bogus. Well, I
was a timid grad student and, despite being one of the best physicists of his
generation who already had written half of an excellent textbook and had a
stellar research record, Mark was a postdoc who spent most of his two years in
absolute anguish about how he was going to find his next job.

Mark wrote a paper about this ten years later, after physicists had published
thousands of bogus papers using bogus statistics. It's a tragedy that neither
Mark, myself, nor some other young turk wrote it earlier -- and it wouldn't
have been hard to do at all, because it would mainly have been a review of
what was already in the statistics literature.

~~~
dexen
_> Secondly, physicists don't look outside the discipline, (...)_

Pardon? Phong shading is a simplified model of the results of a physical
process -- light reflecting off certain kinds of surfaces (metallic ones,
IIRC). It's imprecise, it has its limitations, etc. Contrast that with the
very tiny, precisely measured acceleration of the Pioneer spacecraft. It is
not very common to get good results from applying a coarse tool to a fine
problem.
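
For reference, the Phong model is literally a handful of dot products -- a
minimal sketch with made-up coefficients, nothing Pioneer-specific:

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    def phong_intensity(n, l, v, ka=0.1, kd=0.7, ks=0.2, shininess=10):
        # Classic Phong: ambient + diffuse + specular terms.
        # n: surface normal, l: direction to light, v: direction to viewer.
        n, l, v = normalize(n), normalize(l), normalize(v)
        diffuse = max(np.dot(n, l), 0.0)
        r = 2 * np.dot(n, l) * n - l        # l mirrored about the normal
        specular = max(np.dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
        return ka + kd * diffuse + ks * specular

    # e.g. light 45 degrees off the normal, viewer head-on:
    print(phong_intensity(np.array([0., 0., 1.]),
                          np.array([0., 1., 1.]),
                          np.array([0., 0., 1.])))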

~~~
PaulHoule
A big part of physics is the art of approximation. It's almost never possible
to have a perfectly 'exact' description of a situation.

For instance, in introductory physics we have students work a number of
problems involving objects falling under the influence of gravity. Air
resistance is rarely considered, and if it were to be considered, approximations
of some sort would be required, since there's no complete theory of
turbulence.
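
For instance, even the crudest drag model already changes the qualitative
picture -- a toy sketch, with invented coefficients:

    # Falling object with and without quadratic air drag, crude Euler steps.
    g, m, k = 9.81, 1.0, 0.05      # gravity (m/s^2), mass (kg), drag lump (kg/m)
    dt, v, v_drag = 0.01, 0.0, 0.0
    for _ in range(int(5.0 / dt)):                  # 5 seconds of fall
        v += g * dt                                 # no drag: v grows without bound
        v_drag += (g - (k / m) * v_drag ** 2) * dt  # drag: approaches terminal speed
    print(v, v_drag, (m * g / k) ** 0.5)            # terminal speed = sqrt(mg/k)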

Everything that involves a complex computer simulation, say molecular dynamics
or fluid flow, involves approximations.

Even when you look at the simplest calculations involving elementary
particles, all of these are intellectually justified by the renormalization
concept that assumes that, at some very small length scale, the laws of
physics that we know will break down -- but we know that QED, QCD and such are
very good approximations.

The first virtue of an approximation is that it captures the qualitative
character of a problem; after that, it's a matter of adding an increasing
number of decimal places of quantitative accuracy.

More sophisticated models of radiation transport exist (these are critical to
the development of H-bombs, etc.) and an obvious follow-up to this paper would
be to use a better radiation transport code to validate the result.

~~~
dexen
Agreed, mostly.

 _> The first virtue of an approximation is that it captures the qualitative
character of a problem, after that, it's a matter of adding an increasing
number of decimal places of quantitative accuracy. _

Unless the approximate model diverges from the actual process in extreme
cases, such as at very high or very low values.

One thing comes to mind, if only slightly related: `reciprocity failure'
[1], an effect in photography where the usual model of the relationship
between shutter speed (exposure time), film sensitivity, and the brightness
of the scene diverges from reality at extreme values.

In such cases, increasing the precision of the model isn't just a numerical
task (variable precision, iteration count, etc.).

\----

[1]
[http://en.wikipedia.org/wiki/Reciprocity_(photography)#Recip...](http://en.wikipedia.org/wiki/Reciprocity_\(photography\)#Reciprocity_failure)

------
BoppreH
Somebody better update the people using it as an anti-science argument.

Edit: I'm serious. Conservapedia used to have an article about it and how it
discredits the scientific model and whatnot. There's a huge knowledge gap
between these views and we should do our best to close it.

~~~
pohl
For reference:

<http://www.conservapedia.com/Pioneer_anomaly>

~~~
JacobAldridge
Interesting - Conservapedia throws me a 403 Forbidden when I try to access
that link, and any other of their pages.

Anyone else having trouble? (Win 7, FF4, UK based ISP)

~~~
MichaelGG
Hah, hilarious. I'm outside of the US and it 403's me too. When I proxy via
the US, it's fine. Guess they don't want evildoers from those atheist
countries getting this information.

~~~
nuclear_eclipse
We just don't want all you freeloading communists stealing all of our hard-
earned capitalist bandwidth...

~~~
eru
They don't even make an exception for the Brits.

~~~
danielsoneg
As the man said…

------
Jabbles
The key conclusion, with error estimates:

 _We performed 10^4 Monte Carlo iterations, which easily ensures the
convergence of the result. The thermal acceleration estimate yielded by the
simulation for an instant 26 years after launch, with a 95% probability, is

a(t=26) = (5.8 ± 1.3) × 10^−10 m s^−2.

... These results account for between 44% and 96% of the reported value

a = (8.74 ± 1.33) × 10^−10 m s^−2

(which, we recall, was obtained under the hypothesis of a constant
acceleration) -- thus giving a strong indication of the preponderant
contribution of thermal effects to the Pioneer anomaly._

<http://arxiv.org/abs/1103.5222>
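
For flavor, a toy version of such a Monte Carlo -- with invented inputs, not
the paper's actual thermal model:

    import numpy as np

    rng = np.random.default_rng(42)
    N = 10_000                        # 10^4 iterations, as in the paper

    # Invented, illustrative inputs: waste heat still dissipated at t = 26 yr,
    # and a net "anisotropy" factor standing in for all the geometry and
    # reflection modelling. Both uncertain, neither taken from the paper.
    P = rng.normal(2000.0, 200.0, N)  # W
    g = rng.normal(0.025, 0.004, N)   # fraction of P radiated directionally
    m, c = 240.0, 2.998e8             # rough craft mass (kg), speed of light (m/s)

    a = g * P / (m * c)               # thermal recoil acceleration, m/s^2
    lo, hi = np.percentile(a, [2.5, 97.5])
    print(f"a(t=26) ~ {a.mean():.1e} m/s^2, 95% interval [{lo:.1e}, {hi:.1e}]")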

------
jvandonsel
It's a bit disappointing, actually. I was hoping for another revolution in
Newton's laws.

~~~
CoffeeDregs
Agreed. Against all reason, I was kinda hoping that the Pioneer Anomaly would
somehow lead us to FTL travel... Something like: "Hey, I think I figured out
the Pioneer Anom... [pause...] Ooooohhhh!"

~~~
JacobAldridge
Well, the results have yet to be verified. Maybe it still will lead to FTL
travel, and someone will solve this problem last year.

------
jarin
Doesn't it seem like a slowdown caused by infrared light emitted by one part
of Pioneer and reflecting off another part of Pioneer is kind of like
powering a sailboat with a giant fan attached to the back of the sailboat?

I'm not a physicist by any means, but doesn't conservation of momentum apply
to photon emission and absorption/reflection as well?

~~~
dexen
The original direction of the (infrared) light is irrelevant, only the end
result matters -- the `net force', with reflections and losses factored in.
And factoring in the reflections is exactly what the article is about.

So it's not like a sailboat with a fan; it's more like a jet airplane with
thrust reversers [1] engaged. While the thrust is ordinarily pointed
rearwards, it's redirected when the reversers are engaged, and the `net
force' (original thrust minus losses) points frontwards.
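
Back-of-the-envelope: each watt of directed radiation exerts a recoil of P/c
on the craft, so only the forward/rearward imbalance matters. A sketch with
invented numbers (which happen to land in the anomaly's ballpark):

    c = 2.998e8        # speed of light, m/s
    m = 240.0          # rough spacecraft mass, kg (illustrative)

    # Power ending up radiated along vs. against the direction of travel,
    # *after* all reflections are accounted for. Invented values:
    P_forward, P_rearward = 60.0, 10.0    # W

    F_net = (P_forward - P_rearward) / c  # N, positive = braking force
    print(F_net / m)                      # ~7e-10 m/s^2, the anomaly's ballpark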

\----

[1] <http://en.wikipedia.org/wiki/Thrust_reverser>

~~~
ars
The reason it's like a jet engine is that the engine "creates" wind - by
burning fuel, the hot exhaust is the new wind. If a jet engine only moved
wind (like a turboprop), then a thrust reverser would not work.

~~~
dexen
EDIT:

Wait, what? I'm convinced that a thrust reverser would (in principle at
least, construction details be damned) work with a propeller engine.

In my understanding, even if the air stream were accelerated rearwards at
first and only subsequently redirected frontward, it'd still create a net
force pointing frontward (a decelerating one), with no need for increased
temperature or extra mass from burned fuel. But I don't have a solid physics
background; somebody please correct me.

It may be easier to visualize with a ducted fan, as used on some RC aircraft
models patterned after jets. I believe a thrust reverser would work in such
a setup, minus friction and turbulence losses.

~~~
ars
Imagine a bent tube in a 'u' shape with a fan in the middle. Both openings of
the tube point toward the front of the plane.

That's basically what a propeller engine with a thrust reverser is. And I
think you can visualize how it will do nothing if you turn on the fan. The
force from the air exiting and entering the tube will cancel out.

~~~
eru
Do you think so? The outward stream would probably be more directed than the
inward stream, which comes in more diffuse. (Though I don't know if that
matters.)

An actual experiment would be nice.

~~~
ars
Maybe.

After writing the previous post, I realized it sounds a lot like a Feynman
sprinkler. So now I'm less sure.

~~~
dexen
A difference in the kinetic energy of the air -- E(outflow) - E(inflow) --
would result in a net force pointing forward (a decelerating force, if we
consider a plane). Basically, we'd need the air to be accelerated forward
more than rearward, producing a net force.

Kinetic energy is E = (mV^2)/2

Obviously the mass of the inflow is equal to the mass of the outflow. We can
achieve a difference in speeds if the effective cross-section of the
(forward-pointing) outlet is smaller than that of the inlet.

That's _very_ off-topic, anyway :^)

~~~
eru
Don't you think a momentum-based approach would be more suitable than an
energy-based one? Something like the sketch below.
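
For instance (invented numbers, and leaning on the ideal-flow result that a
diffuse intake contributes almost no reaction -- the Feynman-sprinkler
point):

    # U-tube fan floating in still air: it ingests air that is at rest far
    # away and throws a directed jet out of the forward-facing outlet.
    # Steady-state momentum balance on the air: only the exhaust jet carries
    # net momentum, so the reaction on the tube is mdot * v_out, pointed
    # against the jet -- a braking force. The intake contributes ~nothing in
    # the ideal case, because the inflow is gathered diffusely.
    rho = 1.2                   # air density, kg/m^3
    A_out = 0.10                # outlet cross-section, m^2 (assumed)
    v_out = 30.0                # jet speed, m/s (assumed)
    mdot = rho * A_out * v_out  # mass flow, kg/s
    F_brake = mdot * v_out      # N
    print(mdot, F_brake)        # ~3.6 kg/s, ~108 N of braking force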

------
pasbesoin
What I appreciate about this is the straightforward manner in which the story
presents the case for _always challenging your/the assumptions._

There was speculation about a whole new aspect of theoretical models of the
universe, and then someone was grounded enough to go over the work and
realize, 'Hey, you're doing the math wrong!'

I wish science education did a better job of teaching this.

------
copper
> Of course, other groups will want to confirm these results and a team at the
> Jet Propulsion Laboratory in Pasadena, which has gathered the data on the
> probes, is currently studying its own computer model of the thermal budgets.

Here's to hoping that the numbers match. Phong shading to numerically
calculate the effects of what is almost a solar sail decelerating Pioneer --
now that's _cool_ science!

~~~
dexen
It also goes to show no scientific research (if performed rigorously and
honestly) should ever be derided as being of little use. One wouldn't expect a
graphics (!) algorithm to solve a (seemingly) physical problem.

Same as one wouldn't expect some obscure branch of algebra to give birth to
modern asymmetric cryptography ;-)

~~~
hollerith
I am not persuaded. You are essentially saying that any attempt to steer
(rigorous, honest) research or research funding in the direction of maximum
usefulness is futile. Pointing out instances in which predicting the effects
of research is tricky does not get you all the way to convincing me of that.

~~~
dexen
_> (...) is futile._

A negation of `should not be derided / may turn out to be important' doesn't
read `is always unimportant'. I agree with you wholeheartedly that pointed
research is important.

Anyway!

Maximum usefulness on what timescale? [1]

To judge the impact of future use, you'd pretty much have to 1) invent all
the important uses on the spot, and 2) judge the market impact of each --
including any ripple effects and any combination with other technologies
that may at first seem totally unrelated [2]. Good luck doing that in a
_reliable_ way.

The honesty of the research itself will be at risk if there is any incentive
to skew it [3].

\----

[1] E. W. Dijkstra makes some pretty sound comments on timescale of research:
[http://www.cs.utexas.edu/users/EWD/transcriptions/EWD11xx/EW...](http://www.cs.utexas.edu/users/EWD/transcriptions/EWD11xx/EWD1175.html)

[2] a semiconductor, by itself, is a high-impact thing. Computer Science (the
theory, without the machines) by itself has little impact on anything. The
two combined give the -- unbelievably transformative -- computer :-)

[3] [http://en.wikipedia.org/wiki/Oil-
drop_experiment#Millikan.27...](http://en.wikipedia.org/wiki/Oil-
drop_experiment#Millikan.27s_experiment_and_cargo_cult_science) \-- subsequent
scientists, instead of reporting objective results of their experiments,
tended to skew the data (more or less unintentionally) towards the original
reference value (which was eventually found to be off by a good bit). The only
incentive here was consistency -- a pretty subtle one!

~~~
hollerith
>To judge the impact of future use, you'd pretty much have to both 1) invent
all the important uses, 2) judge the market impact of each

We are in disagreement here about how prediction works even after your use of
the qualifiers "pretty much" and "important" is taken into account. (I should
add that I am not saying that there is nothing to your assertion in
grandparent, just that it is not the whole story and is too pessimistic about
the human ability to predict.)

Parenthetically, I once got into a similar disagreement on Less Wrong -- with
someone who claimed that the only way to predict any aspect of the outcome of
a computer program was to run the program. So let me get your opinion on that,
and let me use Dijkstra to choose an unambiguous question to ask you:

Dijkstra claimed that a person could arrive at a high confidence that a
program has particular useful properties (e.g., that a program that keeps
track of balances in bank accounts obeys the "law of the conservation of
money") without ever running the program, but rather by developing a proof of
the "correctness" of the program (using techniques whose development forms a
large part of Dijkstra's reputation) at the same time one develops the
program. Do you disagree with Dijkstra? I'm interested in replies from others
too.

~~~
dexen
I agree with him in a limited scope :D

A certain class of programs is written with the explicit (or at least
implicit) goal of having provable properties. Like (hopefully) the program
the bank uses to track account balances.

However, a seemingly much easier problem (the Halting Problem [1]) is
undecidable in the _general_ case. One can find out at least some properties
of some programs some of the time with certainty (modulo mundane mistakes).
But one cannot find all the properties of all the programs all of the time.

Now, perhaps programming isn't the best model of all human activities, but!
If we were to narrow the discussion down to research on Computer Science
alone -- it is proven (via the Halting Problem) that you can't know all the
outcomes of every possible program ahead of time. Which means research on
some (possibly valuable) programs can't be graded with 100% certainty ahead
of time, because there is no way of knowing all the properties of a program
in advance.

Back to the original topic, my point was: if somebody invested effort into
(honest, rigorous) research, it should not be derided, ever, as it may find
unexpected, valuable uses. I don't claim anything about research pointed
towards commercial (or other) goals, except the general ``it's important,
too -- but it's not the only path we should follow''.

Back to the matter of predicting: I am convinced that, in general, you can't
predict some of the outcomes of research -- nor some of the applications of
the results. Moreover, I believe that many of the innovations and discoveries
are among what can't be predicted.

Now, economic predictions (this will sell / this won't sell) are wrong some
of the time. To prevent a wrong prediction from blocking research, there is
usually a pool of money for `blue-sky research', generally realized as
countries sponsoring academia and individuals financing research out of their
own pockets.

It may be very hard to estimate the effects of mispredictions on research,
due to this continuous financing -- financing that's independent of whether
there is a clear, short-term goal for the research.

EDIT: Rice's theorem, posted by sid0 [2], is a much better example.

EDIT2: as a funny corollary to the Halting Problem, in some cases even running
the program won't give you a definite answer -- the program may go on
endlessly. Running the program is not a foolproof solution, and thus not a
general one.

It follows that your discussant was proven wrong (not completely right, to be
exact) preemptively -- by Turing's proof of the undecidability of the Halting
Problem ;-)
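
A concrete toy for that EDIT2: whether the loop below terminates for _every_
n > 0 is the open Collatz problem, and running it settles nothing in general
-- a non-halting run just looks like one that hasn't halted yet.

    def collatz_steps(n):
        # Nobody has proven this loop halts for all n > 0 (Collatz conjecture).
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    print(collatz_steps(27))   # 111 steps -- but that settles only n = 27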

\----

[1] <http://en.wikipedia.org/wiki/Halting_problem>

[2] <http://news.ycombinator.com/item?id=2392186>

~~~
hollerith
Thanks for the reply.

------
bhickey
Wouldn't ambient occlusion have been a better technique?

~~~
CountHackulus
Yeah, I'm not sure that Phong Shading is really the takeaway here. It seems
that it's more like they went from a local "illumination" technique to a full
global "illumination" technique to take into account secondary bounces.

~~~
lutorm
This was my impression, too.

------
dexen
There is a detailed, concise article on how the model was prepared:
[http://www.planetary.org/programs/projects/pioneer_anomaly/u...](http://www.planetary.org/programs/projects/pioneer_anomaly/update_20080519.html)

