“You need to be able to tell the difference between one wave front and the next, and if the next wave front is 1.3 mm behind and traveling at the speed of light, then you need to reliably distinguish between events 4 picoseconds apart (4 trillionths of a second). So, every telescope needs a shiny new atomic clock and a really fast camera. You begin to get a sense of why the data consolidation looks more like a cargo shipment than an email attachment; trillions of snapshots every second of not just the waves you’re looking for, but also the waves that will cancel out once all the data is processed. Add to that the challenge of figuring out (generally after the fact) where every detector is to within a fraction of a millimeter even as their orientation changes and the Earth rotates, and you’ve got a problem.”
Just stupefyingly complex and amazing!
For a wavelength of 1.3 mm, we'd want the time tagging to be better than a quarter of the wavelength at least - say 0.3 mm. The speed of light is 300 mm/ns (a foot per nanosecond is the shorthand beloved of circuit and chip designers). So, for 0.3 mm, we're going to have to get down to a wavefront tagging accuracy of 0.001 ns.
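The arithmetic above is quick to check (the quarter-wavelength criterion is a rule of thumb for illustration, not a hard spec):

```python
wavelength_mm = 1.3
path_accuracy_mm = wavelength_mm / 4   # ~0.3 mm, the quarter-wave rule of thumb
c_mm_per_ns = 300.0                    # speed of light: roughly a foot per nanosecond
accuracy_ns = path_accuracy_mm / c_mm_per_ns
print(f"{accuracy_ns * 1000:.1f} ps")  # ~1 picosecond of timing accuracy
```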
No clock is going to get there, but if we can get ~close enough, we can use a procedure called fringe fitting to determine the clock corrections by looking at the wavefronts. (Does it line up this way? How about this way? How about now? Yes, it's as laborious as it sounds, but computers, eh.)
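A toy version of that "does it line up this way?" search, with made-up numbers: a pseudorandom sky signal seen by two stations, one of whose clocks is off by an unknown number of samples. Sliding trial delays and keeping the one that maximizes the correlation recovers the clock error:

```python
import random

rng = random.Random(42)
sky = [rng.gauss(0, 1) for _ in range(2200)]  # noise-like signal both stations see
true_delay = 37                               # station B's clock error, in samples
n = 2000
a = sky[:n]                                   # station A recording
b = sky[true_delay:true_delay + n]            # station B recording, offset clock

def overlap(lag):
    # correlation of A against B for one trial clock correction
    return sum(a[t + lag] * b[t] for t in range(n - 100))

best = max(range(100), key=overlap)           # "how about this way? ... yes"
print(best)                                   # recovers the 37-sample offset
```

Real fringe fitting solves for delay and delay rate against noisy band-limited data, but the "try it until it lines up" structure is the same.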
This is all in the calibration of data, before we do the Fourier inversion to create images - the magic of radio interferometry is that we can record the signal to disk while preserving phase. Optical photons cannot be recorded and played back with phase preserved - optical interferometry has to split up the photon streams and send different parts to be correlated against streams from other telescopes, so you run out of signal quickly. Meanwhile, we can record radio waves at the 27 VLA dishes, say, and play them back for correlation on all 27*26/2 = 351 baselines, no problem. That's why radio VLBI is a thing, but not optical VLBI.
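The baseline count is just the number of unordered dish pairs, n(n-1)/2, which is easy to check:

```python
from itertools import combinations

n_dishes = 27                                       # the VLA's dish count
baselines = list(combinations(range(n_dishes), 2))  # every unordered pair of dishes
print(len(baselines))                               # 27 * 26 / 2 = 351
```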
Even to a professional radio astronomer, the underlying physics is deep and almost magical.
Hi, long baseline optical interferometrist here who specializes in modeling and image reconstruction.
To set the record straight, long baseline optical interferometry really is a thing. At present there are two optical interferometers operating in the USA and one under construction: Georgia State University's Center for High Angular Resolution Astronomy (CHARA) and the Navy Precision Optical Interferometer (NPOI), with New Mexico Tech's Magdalena Ridge Optical Interferometer (MROI) under construction. Europe operates the Very Large Telescope Interferometer (VLTI) in Chile. Australia has the Sydney University Stellar Interferometer (SUSI). Optical interferometers have been around for a really long time. Michelson famously measured the diameter of Betelgeuse in December 1920. The first image from an optical interferometer was of Capella, produced by the University of Cambridge's COAST telescope in September 1995.
The key difference between VLBI and optical interferometry is that we must combine the light from each telescope in real time, rather than recording the RF data to disk and forming the interference patterns later using correlation. Our interference patterns are recorded on high speed cameras, extracted, calibrated, and then stored as OIFITS files. These files are then later reconstructed using a variety of methods, including Markov chain processes and regularized maximum entropy.
Except for the CLEAN deconvolution process, the methods used to reconstruct images from the EHT data are identical to what optical interferometry has been doing for decades (see https://iopscience.iop.org/article/10.3847/2041-8213/ab0e85, Section 2.2.2 for references to literature). The maximum entropy process used for optical interferometric image reconstruction was, in turn, developed for MRI image reconstruction.
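As a hedged illustration of the regularized-reconstruction idea (a toy sketch, not the EHT's or any optical pipeline's actual code): an interferometer samples only some spatial frequencies, so the inverse problem is underdetermined, and a regularizer such as entropy selects one image consistent with the data. A 1D version with made-up sizes:

```python
import cmath, math

N, freqs = 16, [0, 1, 2, 3]          # 16 pixels; only 4 spatial frequencies sampled
true_img = [0.0] * N
true_img[5], true_img[6] = 1.0, 0.5  # a small bright feature

def vis(img, k):
    # one "visibility": the k-th DFT component of the image
    return sum(img[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))

data = [vis(true_img, k) for k in freqs]  # the sparse "observations"

def chi2(img):
    return sum(abs(vis(img, k) - d) ** 2 for k, d in zip(freqs, data))

# Projected gradient descent on chi^2 + lam * sum(x log x), pixels kept positive
x = [0.1] * N                        # start from a flat image
lam, step = 1e-3, 1e-3
start_misfit = chi2(x)
for _ in range(3000):
    resid = [vis(x, k) - d for k, d in zip(freqs, data)]
    for n in range(N):
        g = sum(2.0 * (r * cmath.exp(2j * math.pi * k * n / N)).real
                for k, r in zip(freqs, resid))
        g += lam * (math.log(x[n]) + 1.0)  # entropy term regularizes the solution
        x[n] = max(x[n] - step * g, 1e-9)  # project back onto positive pixels

print(round(chi2(x), 6))  # far below the misfit of the flat starting image
```

Production codes differ in every detail (2D images, closure quantities, smarter optimizers), but this is the shape of the "fit the sampled frequencies subject to a prior" problem both fields solve.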
Don't get me wrong, I am not attempting to trivialize the result of the EHT team. The effort involved is monumental and the result is astonishing. In fact, I suspect my facial expression was very similar to Katie Bouman's now famous photo when I first saw the image. Then my jaw hit the floor when I found that some of my work (Baron, Monnier, Kloppenborg 2010) was cited in their imaging paper! However, my first inspection of the "eht-imaging" and "SMILI" repositories has yet to reveal anything new or novel that is not regularly employed by optical interferometrists.
> the magic of radio interferometry is that we can record the signal to disk while preserving phase. Optical photons can not be recorded and played back with phase preserved
Why is that the case for optical photons, when, as you answered elsewhere, it’s “deeper” than just a higher frequency?
Black holes are small things. They're just very heavy.
The moon is about 350,000 to 400,000 km away. By my estimate (of other people's estimate), we're looking at a spot in the sky roughly the size of a dime (US 10-cent coin) on the moon.
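A sanity check of the dime comparison, using nominal values (US dime diameter and mean Earth-Moon distance) under the small-angle approximation:

```python
import math

dime_mm = 17.9               # nominal US dime diameter
moon_km = 384_400            # mean Earth-Moon distance
theta = (dime_mm / 1e6) / moon_km             # small angle: size / distance, radians
microarcsec = math.degrees(theta) * 3600 * 1e6
print(f"{microarcsec:.0f} microarcseconds")   # ~10 microarcseconds
```

That is the same order as the EHT's resolution of a few tens of microarcseconds, so the comparison lands in the right ballpark.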
I think my reptilian brain understands the distance of the moon and the size of a dime.
The moon is very far away, but it DOES grow and shrink as it comes closer to us and moves farther away. Using the size differential, you can get an innate feel for the size of the moon. Technically speaking, just driving towards (or away from) the Moon will have you traveling closer to / further from it, and give you a sense of scale.
The next time you drive to a major city or large, recognizable landscape, keep an eye on how big and small mountains (or buildings) are and how quickly they move in parallax against the foreground. It really does give an instinctive sense of scale. Train this instinct well enough, and you can use it on the moon.
Everybody can understand at least the introduction. The author treats two related problems. The first is the (today) well-known integration of data from several separate antennas. The second is the problem of recovering an image of an object from the light reflected off a white wall. They are formally quite close, and the underlying math is, at many points, the same.
This gave me goosebumps. That's the kind of stuff that justifies a permanently inhabited moon-base.
The data manipulations that this EHT team did to process their raw data are NOT acceptable from the perspective of a correct scientific experiment.
They got their images only when they allowed themselves to creatively interpret data from their telescopes.
What you can do is to use methods where you [have] do not need any calibration whatsoever and you can still can get pretty good results.
So here on the bottom at the top is the truth image, and this is simulated data, as we are increasing the amount of amplitude error and you can see here ... it's hard to see ... but it breaks down once you add too much gain here. But if we use just closure quantities - we are invariant to that.
So that really, actually, been a really huge step for the project, because we had such bad gains.
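The "closure quantities" invariance mentioned in the quoted talk is real and easy to verify numerically: around a triangle of stations, each station's unknown gain phase cancels out of the product of the three visibilities. A sketch with arbitrary made-up visibilities and gains:

```python
import cmath, random

rng = random.Random(1)
# True visibilities on the three baselines of a station triangle (arbitrary values)
V12 = cmath.rect(1.0, 0.3)
V23 = cmath.rect(0.7, -1.1)
V31 = cmath.rect(0.4, 2.0)
true_closure = cmath.phase(V12 * V23 * V31)

# Corrupt each with unknown station gains: observed V_ij = g_i * conj(g_j) * V_ij
g = [cmath.rect(rng.uniform(0.5, 2.0), rng.uniform(-3, 3)) for _ in range(3)]
O12 = g[0] * g[1].conjugate() * V12
O23 = g[1] * g[2].conjugate() * V23
O31 = g[2] * g[0].conjugate() * V31

# In the triple product every g_i meets its own conjugate, leaving the positive
# real factor |g0 g1 g2|^2 -- so the closure phase is untouched by the gains.
obs_closure = cmath.phase(O12 * O23 * O31)
print(abs(obs_closure - true_closure) < 1e-9)  # True
```

This is why station-based calibration errors, however bad, cannot manufacture or destroy closure phases.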
They also deleted multiple critical comments from that video presentation.
E.g. "Pratik Maitra" posted multiple comments that later disappeared.
When we use side-scan sonar to create representations of the ocean floor (e.g. https://commons.wikimedia.org/wiki/File:Laevavrakk_"Aid".png), they are computationally reconstructed from the raw data which are not intrinsically recognized as pixels without reconstruction. Are these not "images"?
What is your actual contention here? Is it that any representation which is not the result of a traditional visible-light camera doesn't count as an "image"?
If so it's an irrelevant distinction to make. If not, you need to articulate in a specific and informed way why the way they reconstructed the image was wrong or could be improved.
It seems from your blog that you don't really understand what a "prior" is and why it might be useful for this kind of signal processing.
Of course the scanner (and any other measurement tool) needs to be calibrated. Specifically, the scanner (and the telescope) needs to be pre-calibrated based on already-known samples.
In the case of a telescope, it needs to be pre-calibrated based on known images of remote stars.
Katie Bouman (the face of the EHT imaging team), however, claims: "you [have] do not need any calibration whatsoever and you can still can get pretty good results"
Check it out, she actually said that: https://youtu.be/UGL_OL3OrCE?t=1180
I am surprised that only a few people caught that flaw.
They spent a lot of effort ensuring that their imaging methods were objective and free of human bias.
Because they are trying to make an image of a black hole, their strongest bias is to see a black hole in anything.
So they should have tested that their final implementation of the "imaging method" does NOT see a black hole when the incoming sparse data does not contain one.
Unfortunately, there is no such test in the presentation.
The EHT team tested that an "imaging method" trained for recognizing a disk (without a hole) is still able to recognize a black hole. See it at [31:55]
But they did not test the reverse: train an imaging method for recognizing a black hole, then feed sparse disk data to that imaging method. Would it be able to see the disk, or would it still see a black hole?
How about feeding it sparse data of 2 bright stars? Would this imaging method, trained to recognize black holes, still be able to see these 2 stars?
Unfortunately, there was no testing like that ... or worse -- they did such testing, but then discarded the results because they did not impress the public and financial sponsors.
But anyway, what alternative do you suggest if I disagree with the evaluation of that "discovery"?
Open public discussion is the way to go, isn't it?
If you actually believe their result is fake, then it's not like the people you need to convince are hacker news readers; you need to convince other physicists who are in a position to agree with you and do something about it.
Anyway if you go around pointing out things like "comments were deleted! they must be covering something up" you are just (rightly) written off as a conspiracy theorist.
Already done: https://dennisgorelik.dreamwidth.org/170455.html
> people you need to convince
Convincing other people is a nice side effect. The main goal is to find flaws in my own reasoning.
> convince other physicists
Why physicists, specifically?
The problem is in overly creative image interpretation. That is "information processing" domain which is quite suitable for Hacker News discussion.
I do NOT dispute physics equations that EHT team used.
> "comments were deleted!"
Suppressing critical arguments is one of the important warning signs of a scam operation. For example, Theranos suppressed critical feedback too.
Suppressing critical feedback is also deeply anti-scientific.
Why should I ignore/suppress that argument?
You also pretend as if "they deleted comments" is the only argument I have.
There are plenty of red flags in what that EHT team did and I list some of them.
Convincing people is not a side effect; it is the goal of a post, or of a strong stance.
The people you need to convince are people who know this subject well. You are in the wrong place. Physicists or data analysis people, whatever, people here are not deeply informed on this, and their opinions, whichever way they go on this, would be pretty irrelevant to the truth of the matter.
Comments were almost certainly deleted for entirely different reasons than suppression of the truth. Scientists, in reality, welcome well-reasoned criticism. Bizarre and ill-argued salvos, however, may very well be ignored or deleted.
Convincing people is a side effect to me, because I am not getting paid for that.
My ability to reason right, on the other hand, helps me to make right technical and business decisions.
> people here are not deeply informed on this
You are implying that there are some scientific gods and then there are poor "us".
The reality is that there are only "us" and these scientific [and pseudo-scientific] gods are just part of us.
> Scientists, in reality, welcome well-reasoned criticism.
Exactly. That is one of the reasons why I think that what this EHT imaging team is doing is anti-scientific.
> Bizarre and ill-argued salvos
19:39 "Calibration Free Imaging"
Does it mean that you were using measurement tools (telescopes) without prior calibration?
What is bizarre about that question?
So the black hole is slowing down the light travelling towards us.... But the speed of light is a constant so... 'time' is slowed down?
So does that mean that the light is 'older' than the 54 million years it took to reach us?
I'm over 50% sure there's a flaw in my reasoning somewhere here...
2nd stupid question:
I thought gravity acting on light was a 'light as particle' thing, rather than a 'light as wave' thing? If that were the case, gravity acting on light-as-particle manifesting in light-as-wave doesn't seem consistent.
I think the best approach is to be pragmatic and simply follow the observations and what the experiments tell us rather than force it into one paradigm or the other and complain when it doesn't make sense. This is sometimes referred to as the "Shut up and do the math" approach.
In some ballistics problems, you'll consider a flat earth (baseball problems) in some others a spherical one (spaceflight). It is about which model makes the math simpler.
This also circumvents all the wave/particle questions; it doesn't matter which conceptualisation you choose for the light, it's the space the light is moving through that is distorted.
And an interesting side note:
Do you think star tracking doesn't give good enough pointing solutions?
You also need very high data rates for producing interference fringes with 1.3 mm waves, which is probably one of the reasons why a satellite like RadioAstron works at longer wavelengths.
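A rough sketch of why the recorded data rates balloon; all numbers below are illustrative assumptions, not RadioAstron's or the EHT's actual recorder specs:

```python
bandwidth_hz = 2e9      # recorded bandwidth per polarization (assumed)
nyquist_factor = 2      # real-valued sampling needs 2 samples per Hz of bandwidth
bits_per_sample = 2     # coarse quantization, common in VLBI recorders
polarizations = 2

rate_bps = bandwidth_hz * nyquist_factor * bits_per_sample * polarizations
print(f"{rate_bps / 1e9:.0f} Gbit/s per station")  # 16 Gbit/s
```

Short wavelengths push you toward wide recorded bandwidths for usable fringe sensitivity, and the rate scales linearly with bandwidth, which is hard for a satellite downlink.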
They either don't make accurate predictions or explanations, or they fail to account for phenomena.
Science is the internalisation of expecting, admitting, and embracing error.
Reading this write-up gave me a much better appreciation for the difficulty of actually capturing that image. I'm sure that's what people wanted me to focus on when seeing the photo, but that sort of context requires a write-up like this; it can't be relayed through a small, blurry picture.
> Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents.
It's telling that when it's a woman getting the credit you're hanging on about teams, but when it's a man getting credit you say nothing.
Somehow people are OK when Elon Musk gets credit for Tesla and SpaceX, or Steve Jobs for the iPhone, but suddenly if it's a woman, people will dig through the GitHub accounts.
Having said that, the look of excitement and joy on Dr Bouman's face in that photo is so lovely and relatable that the pic was always destined to go viral. So in that sense you could say she was bound to become the human face of the project.
There are several similar releases every year: finding the Higgs boson, landing Rosetta's Philae lander on a comet, launching a new kind of rocket.
Were you questioning people doing these releases the same way? Why not? Is it maybe because they were of the expected gender?
Before accusing people of biases it's best to examine yourself.
I look at it two ways: one, what she did was not insignificant. Whether she received more praise than someone discovering something else that happened to be a guy, not a concern that is going to cause me to lose sleep at night nor is it something she should be punished for. Two, if her story getting bubbled up inspires girls (or anyone, I suppose) to get involved in or at least more interested in the sciences, hell, I'd say we're all better for it.
Even if it is completely unnecessary nowadays (because women are no longer dismissed, individually or collectively, as not contributing to science, and no longer harassed, sexually or misogynistically, when they try to pursue a career in science), I still think it's fair to let people glory for a while longer in achieving what people once thought they wouldn't, even if most of the disbelievers are centuries dead and hardly any of them are still alive and posting on the internet.
edit: I realise the post I'm responding to is sarcastic, but not everyone who thinks women are intellectually inferior to men is a troll
Just compare it, Messier 87, to ours, Sagittarius A*: https://en.wikipedia.org/wiki/List_of_most_massive_black_hol...
That "picture" is a stunning accomplishment in deception -- following the steps of Elizabeth Holmes and Bernie Madoff.
This EHT team took white noise from their telescopes, then creatively converted that white noise into one of the theoretical pictures of a black hole.
And you can notice like at the bottom we get really terrible reconstruction, just cause if it fits the data very well, because you know it maybe wants to smooth out the flux as much as possible and we don't select things like that in the true data.
They simply discard image interpretations that do not fit the theoretical image they want to see. How convenient. They call it "Calibration Free Imaging":