
Neutrinos Faster Than Light, or FPGA artifacts? - mrb
http://blog.zorinaq.com/?e=58
======
DanielBMarkham
A nit, but this struck me as highly speculative. I, like the author, want to
see the FPGA equipment analyzed further.

I kinda felt like I was led down the garden path a bit here. In the end, the
entire essay (to me) boils down to "Stuff we don't know about could be stuff
we should worry about." This is a true statement, but I'm not sure how useful
it is to the reader.

Perhaps it would have been better to list all of the possible problems, then
show why the FPGA was the biggest concern. That way I would be more informed
about what the state of the discussion is. While it's really cool that the
author can outline the myriad ways an FPGA reading can go wrong, context
here is more important than the ability to impress us with all the technical
details (which are also important, no doubt, but only with context). Put
another way, this was a bit of "nerd porn": lots of cool little technical
details that need to be considered when dealing with FPGAs in places like
this. For that, it's definitely great HN material.

Just a structural criticism of the essay, not the topic or author. The topic
and writing quality were awesome.

~~~
juiceandjuice
I've worked with high-resolution timing devices, data acquisition systems and
NIST-traceable timestamps, PMTs and APDs and FADCs. 10 ns precision is a
relative cakewalk, and can be done with about $600 in equipment. Oftentimes
people neglect some latency introduced by analog filtering, but it's
nowhere near the order of 60 ns. Furthermore, these systems are almost
always calibrated daily with fiber optics, lasers, or other subatomic
particles or particle accelerators. So oftentimes the delay is actually
measured relative to a particle (usually a photon) moving at or near the speed
of light in a medium with a known refractive index. I'm sure it's not too hard
to find the whitepaper for the equipment if you really wanted to look for it.

I'm not saying it's not possible, it's just extremely unlikely. The author is
asking for FPGA analysis when he should be asking for calibration analysis.
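
To put numbers on the "photon in a medium with a known refractive index"
point, here's a rough scale check; the n = 1.47 for standard single-mode
fiber and the cable lengths are my own assumptions, not from any OPERA
documentation:

    # Rough scale check: propagation delay of a calibration photon in fiber.
    # n = 1.47 is typical for single-mode fiber (my assumption).
    C = 299_792_458.0  # speed of light in vacuum, m/s

    def fiber_delay_ns(length_m, n=1.47):
        """Delay of light through length_m meters of fiber, in ns."""
        return length_m * n / C * 1e9

    print(fiber_delay_ns(1.0))   # ~4.9 ns per meter of fiber
    print(fiber_delay_ns(12.0))  # ~58.8 ns: a 12 m cable error alone is ~60 ns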

~~~
AgentConundrum
> _I'm not saying it's not possible, it's just extremely unlikely._

Which is more unlikely though, that the FPGA has an issue where precision can
be worse than 60ns, or that we're wrong about the speed of light / the ability
of matter to travel beyond that speed? We're talking about hard to believe
things here, so we shouldn't discard the possibility that this was the failure
point simply because it's unbelievable.

I know you're not discarding the idea, and I welcome the context you bring to
the discussion; I just wanted to point out that in this case there really
isn't much that should be thrown out as impossible.

~~~
sausagefeet
But saying it's extremely unlikely doesn't mean the speed of light is wrong,
it means the error could be elsewhere in the system.

------
HardyLeung
_Firstly, if this FPGA-based system is using DRAM and implements caching,
results may vary due to a single variable or data structure being in a cache
line or not, which may or may not delay a code path by up to 10-100 ns
(typical DRAM latency). This discrepancy may never be discovered in tests
because the access patterns by which an FPGA (or CPU) decides to cache data
are very dependent on the state of the system._

Clearly, the author does not know what he is talking about.

~~~
inoop
I'm not an expert on such systems. Could you elaborate on this a bit more?

~~~
alain94040
The only design I can think of in an FPGA that would use a "DRAM cache" is a
processor. And by definition, no one assumes processor code execution to be
timing accurate to 10ns.

~~~
gvb
Lots of people use SDRAM with FPGAs.
[http://www.google.com/search?channel=fs&q=fpga+sdram&...](http://www.google.com/search?channel=fs&q=fpga+sdram&ie=utf-8&oe=utf-8)
SDRAM is used commonly with FPGAs for storing largish amounts of data at a
reasonable price (static RAM is fast, predictable, and very expensive for
large fast memory arrays).

SDRAMs are horribly complex, state-machine-driven, pipelined devices. On a
system I was working on, a simple write operation to the SDRAM could vary from
several clock cycles to 52 (IIRC) clock cycles, depending on what was going on
before the "simple" write. I forgot what the worst-case stackup was, but it
was probably something like a refresh followed by a page miss that needed a
precharge...

AnandTech has a good intro... note the _simplified_ state machine diagram
(Figure 2) is _not_ simple. If they published the full state machine, your
head would explode.
[http://www.anandtech.com/show/3851/everything-you-always-wanted-to-know-about-sdram-memory-but-were-afraid-to-ask/](http://www.anandtech.com/show/3851/everything-you-always-wanted-to-know-about-sdram-memory-but-were-afraid-to-ask/)

Trivia: The data throughput listed by SDRAM manufacturers is the best case
sequential access fully pipelined scenario when all -nine- eight planets
align.

Speculating here: the variability of SDRAM access latency is one reason for
putting a high-speed cache in front of it. For instance, the "cache" could be
a simple FIFO that buffers the data to avoid dropping it when an operation
gets hit by a long latency stackup. The SDRAM may have the _average_
throughput to handle the data stream, but the (unpredictable) major bumps in
latency may require "cache" buffering to keep from dropping data.
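
To make the "several to 52 cycles" variability concrete, here's a toy model
in the spirit of the above; the cycle counts and probabilities are invented
for illustration, not taken from any datasheet:

    import random

    # Toy model of SDRAM write latency stackup (illustrative numbers only).
    # A "simple" write can stall behind a precharge + activate on a page miss,
    # and behind an in-flight auto-refresh in the worst case.
    BASE_WRITE = 4   # page-hit write, cycles
    PRECHARGE  = 3   # close the currently open row
    ACTIVATE   = 3   # open the target row
    REFRESH    = 42  # stall behind an auto-refresh already in progress

    def write_latency(page_hit, refresh_pending):
        cycles = BASE_WRITE
        if not page_hit:
            cycles += PRECHARGE + ACTIVATE
        if refresh_pending:
            cycles += REFRESH
        return cycles

    samples = [write_latency(random.random() < 0.8, random.random() < 0.02)
               for _ in range(100_000)]
    print(min(samples), max(samples))  # 4 .. 52 cycles for the same operation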

------
perlgeek
That's the beauty of revealing lots and lots of details about the measurement
process: some "random guy on the internet"[1] can point out possible mistakes,
or at least suggest where to search.

So some people from the telecom or HPC communities who have much experience
with lags and timing jitter can contribute their expertise.

[1] I don't mean to disrespect the author, just to point out that he's
probably not involved with the experiment at all.

------
Havoc
>latencies of the order of 10-100 ns unexpectedly added or subtracted to a
baseline

Perhaps my stats are a bit rusty, but if the discrepancy is +60 ns (only
plus) with <20 ns uncertainty, doesn't that rule out a ±100 ns source (plus
or minus) as the cause of the (possible) error? I.e., we should be looking
for an issue that shifts the results, not something that adds noise.

~~~
splat
I think that the author's point is that these latencies would, in fact, add a
systematic bias to the measurements (i.e. they would shift the results, not
add noise). I interpreted "added or subtracted" to mean that this bias could
either be added at the source end or subtracted at the receiving end.
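
A quick synthetic illustration of the difference (all numbers invented for
the sketch): a constant offset shifts the mean of the arrival times without
widening their spread, while a random per-event latency mostly widens the
spread:

    import random, statistics

    # 10 ns of honest measurement noise in both cases.
    systematic = [random.gauss(0, 10) - 60 for _ in range(10_000)]
    jittery    = [random.gauss(0, 10) + random.uniform(-60, 60)
                  for _ in range(10_000)]

    print(statistics.mean(systematic), statistics.stdev(systematic))  # ~-60, ~10
    print(statistics.mean(jittery),    statistics.stdev(jittery))     # ~0,  ~36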

~~~
Havoc
The diagram being discussed is labeled "LNGS", so I'm pretty sure both green
and blue relate to one end (receiving), just different parts of the receiving
end.

------
beej71
"I tried that! Don't you think I would have tried that?" --Paul Richter,
WarGames

Not that it's not a worthy thing to consider (after all, the claim is
extraordinary), but it was probably considered.

~~~
glimcat
The bug is almost always something you overlooked as too simple or obvious to
check.

------
unnivs
The observations are worth looking at. Let me quote part of an article
about CERN's 'faster than light' claim. Here it is: "An American experiment
involving Fermilab and a Minnesota mine showed the same thing back in 2007,
but the results were within a margin of error that kept anyone from jumping up
and down about it. (The CERN results are within a margin of statistical
certainty that, were this not such an unexpected result, it would be
considered a new discovery.) Now the team plans to update that experiment with
about 10 times more data". We can't rule out the possibility of an entirely
new perspective on quantum physics. The tests are ongoing and we should hear
a follow-up story soon.

~~~
jessriedel
> Now the team plans to update that experiment with about 10 times more data

FYI, more data taken by the same experimental set-up isn't going to help. This
is already a 6-sigma result, which means that a statistical fluctuation is
ruled out as an explanation (~ 1 in a million chance). Instead, there's almost
certainly a systematic error which is being made in the same way for every
data point, so taking more data doesn't do anything.

~~~
Maro
6 sigma is 1 in 506 million.

~~~
jessriedel
Ahh, thanks. I just typed "six sigma" into Google and pulled the number off
the Wikipedia page about that business practice. Should have gone with
Mathematica.
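
Actually, no Mathematica needed; the standard Gaussian tail odds are a
one-liner:

    from math import erfc, sqrt

    # One-sided tail probability of a standard normal at n sigma.
    def tail(n_sigma):
        return 0.5 * erfc(n_sigma / sqrt(2))

    for n in (5, 6):
        p = tail(n)
        print(n, 1 / p, 1 / (2 * p))
    # 6 sigma: ~1 in 1.01 billion one-sided, ~1 in 506 million two-sided.
    # (5 sigma one-sided is ~1 in 3.5 million, the "1 in a million" scale.)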

------
Kurtz79
Regardless of the technical details, I believe the author of the article is
making the wrong assumptions.

He is talking about errors depending on the internal state of the FPGA, or
"the guy operating the device leaving the door open" leading to a change in
temperature (very unlikely, but still, you have to consider everything).

The problem is that all of these possibilities would lead to different
readings on multiple trials, while we are not talking about a "one-off"
experiment, but one that has been repeated a number of times with consistent
results.

An error might still be there, but it is clearly not caused by a random
internal state or by outside conditions; it must be systematic.

------
ajays
This is easy to figure out. Move the detection station to, say, half the
current distance. Do the neutrinos still come out 60ns faster? Then it's a
problem with the apparatus. Do they come out only 30ns faster? Then some of
the Physics needs to be rewritten.
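
A sketch of the two signatures; the velocity excess and the fixed offset
below are illustrative values, chosen only to roughly match the reported
~60 ns anomaly over the ~730 km baseline:

    C = 299_792_458.0           # m/s
    V = C * (1 + 2.5e-5)        # toy superluminal speed (~60 ns early at 730 km)
    APPARATUS_OFFSET_NS = 60.0  # toy fixed timing error, independent of distance

    def early_ns(d_m):
        """Early arrival vs light, in ns, if neutrinos genuinely travel at V."""
        return (d_m / C - d_m / V) * 1e9

    for d_km in (730, 365):
        print(d_km, round(early_ns(d_km * 1e3), 1), APPARATUS_OFFSET_NS)
    # New physics scales with distance (~60 ns -> ~30 ns);
    # an apparatus offset stays put at 60 ns.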

~~~
perlgeek
This sounds easy, but it's not.

The neutrino detector needs to be far underground (to shield it from cosmic
rays), is pretty heavy (I remember something about 60 metric tons, but it
could be more), its location needs to be known very precisely, and it needs
the same sophisticated timing infrastructure as OPERA has.

I guess it would be easier to build another neutrino emitter at a different
particle accelerator, and direct the beam towards the OPERA lab (or another
existing lab).

------
lutorm
It sounds like most of the things that concern the author would introduce
substantial amounts of scatter on top of some possible systematic shift, so I
have a hard time believing they wouldn't have seen that.

------
sp332
They said that they measured the FPGA delay by shining a beam on it and seeing
how long it took to register. It seems like there is a lot of room for
variability in this practically un-analyzed system.

------
shin_lao
I don't want to sound elitist, but I really think the team double-checked all
the trivial reasons which could explain this extraordinary result (namely:
faster-than-light particles).

~~~
jarin
Probably, but think of all those times when you checked and double-checked
your code, only to find out that it was a simple typo causing the bug.

This will need to be experimentally verified (preferably with a completely
different set of equipment), but it doesn't hurt to double-check and triple-
check the simple stuff.

------
antimora
Couldn't this possibility be eliminated by calibration?

~~~
ithkuil
E.g. having a source of neutrinos 1 km away from the receiver, and seeing if
they still detect it 60 ns "before" (perhaps even before they fired them).

Is the cost/technology for doing this kind of quick check prohibitive?

I was wondering whether the neutrino detector has to be directed precisely
towards CERN or is omnidirectional. Does anyone know?

~~~
ars
A neutrino detector is omnidirectional. It would be almost impossible to make
it directional even if you wanted to.

You can try to measure which direction the scattered particles from a neutrino
collision travel. But the actual detector is omnidirectional.

But all that said, a strong neutrino source isn't easy to come by.

~~~
ithkuil
Assuming the emission is also omnidirectional, the required source strength
decreases with the square of the distance, so it shouldn't require a strong
source at 1 km.

I guess it's more difficult to tell exactly when the bursts are being emitted.
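
Rough numbers for the inverse-square point, assuming isotropic emission
(which, per the reply below, isn't how the beam actually works) and a
~730 km CERN-LNGS baseline:

    # Flux gain from moving an isotropic source from ~730 km to 1 km.
    BASELINE_KM = 730.0  # approximate CERN-LNGS distance (my figure)
    NEAR_KM = 1.0

    gain = (BASELINE_KM / NEAR_KM) ** 2
    print(gain)  # ~533,000x the flux per unit source strength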

~~~
ars
No, the emission is directional.

------
drallison
<http://xkcd.com/955/> applies here.

------
hackermom
Seems speculative. Haven't they measured the speed of EM radiation, or of
some subatomic particle other than the neutrino, with the very same equipment,
and gotten the expected, scientifically agreed-upon speed as a result? It
seems to me that testing the measuring/timing hardware against a known
phenomenon would be a fundamental step before releasing and using it.

~~~
mikeash
As I recall, the path taken by the neutrinos in this experiment is several
hundred miles of solid rock, so, no, there really isn't any other type of
particle that could be tested with it.

~~~
gnaritas
Sure there is... dark matter particles; now if only we can find them!

~~~
eru
Aren't neutrinos part of the dark matter?

~~~
gnaritas
No.

~~~
ars
That depends on how you define dark matter. (i.e. do you mean only cold dark
matter).

Dark matter is just matter that isn't visible electromagnetically and
neutrinos do qualify.

~~~
gnaritas
While that is technically true, the context of this comment should make it
clear that I'm referring to cold dark matter, otherwise my comment about
finding them wouldn't make sense.

~~~
ars
But if this discovery is true then there is a lot we don't know about
neutrinos. For all we know they could be dark matter, and there is no other
particle.

For example, if they are a type of tachyon, then the closer to c they are,
the less energy they contain; this would actually make them very cold.

(Personally I doubt they do exceed c, but I do think that they are actually
dark matter, only created via some process that keeps them slow.)

------
CamperBob
More info on the time-transfer system that was just released today:
<http://www.ohwr.org/projects/cngs-time-transfer/wiki/Wiki>

