Why was it so hard to take a picture of a black hole? (askamathematician.com)
362 points by ColinWright 6 days ago | 110 comments





Uhhhh this statement in the article about halfway down is incredible!

“You need to be able to tell the difference between one wave front and the next, and if the next wave front is 1.3 mm behind and traveling at the speed of light, then you need to reliably distinguish between events 4 picoseconds apart (4 trillionths of a second). So, every telescope needs a shiny new atomic clock and a really fast camera. You begin to get a sense of why the data consolidation looks more like a cargo shipment than an email attachment; trillions of snapshots every second of not just the waves you’re looking for, but also the waves that will cancel out once all the data is processed. Add to that the challenge of figuring out (generally after the fact) where every detector is to within a fraction of a millimeter even as their orientation changes and the Earth rotates, and you’ve got a problem.”

Just stupefyingly complex and amazing!


Calling it a "really fast camera" elides much of the actual difficulty. We're trying to tag individual wavefronts of light at different telescopes, record them, and then play them back at a central "correlator" with the appropriate delays so that the waves come to a focus.

For a wavelength of 1.3 mm, we'd want the time tagging to be better than a quarter of the wavelength at least - say 0.3 mm. The speed of light is 300 mm/ns (a foot per nanosecond is the shorthand beloved of circuit and chip designers). So, for 0.3 mm, we're going to have to get down to a wavefront tagging accuracy of 0.001 ns.
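If you want to sanity-check that arithmetic, here it is as a tiny Python sketch:

    wavelength_mm = 1.3
    tag_accuracy_mm = wavelength_mm / 4      # a quarter wave, ~0.3 mm
    c_mm_per_ns = 300.0                      # speed of light, 300 mm/ns
    print(tag_accuracy_mm / c_mm_per_ns)     # ~0.001 ns, i.e. about a picosecond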

No clock is going to get there, but if we can get ~close enough, we can use a procedure called fringe fitting to determine the clock corrections by looking at the wavefronts. (Does it line up this way? How about this way? How about now? Yes, it's as laborious as it sounds, but computers, eh.)
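If you want a feel for what fringe fitting is doing, here's a toy numpy sketch (made-up numbers, nothing like a production correlator): record the same noise-like sky signal at two stations with an unknown clock offset, then scan trial delays and keep the one where the cross-correlation peaks.

    import numpy as np

    rng = np.random.default_rng(0)
    signal = rng.standard_normal(100_000)    # the "sky" signal, as noise

    true_offset = 137                        # unknown clock error, in samples
    station_a = signal[:-true_offset]
    station_b = signal[true_offset:]         # same signal, shifted clock

    def correlate_at(delay):
        a = station_a[delay:]
        b = station_b[:len(a)]
        return np.dot(a, b) / len(a)         # correlation at this trial delay

    trials = range(300)
    best = max(trials, key=correlate_at)
    print(best)                              # -> 137

The real thing does this per frequency channel, with fractional-sample delays and atmospheric terms on top, but the "does it line up this way?" loop is the same idea.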

This is all in the calibration of the data, before we do the Fourier inversion to create images - the magic of radio interferometry is that we can record the signal to disk while preserving phase. Optical photons cannot be recorded and played back with phase preserved - optical interferometry has to split up the photon streams and send different parts to be correlated against streams from other telescopes, so you run out of signal quickly. Meanwhile, we can record radio waves at the 27 VLA dishes, say, and play them back for correlation on all 27*26/2 = 351 baselines, no problem. That's why radio VLBI is a thing, but not optical VLBI.

Even to a professional radio astronomer, the underlying physics is deep and almost magical.


> That's why radio VLBI is a thing, but not optical VLBI.

Hi, long baseline optical interferometrist here who specializes in modeling and image reconstruction.

To set the record straight, long baseline optical interferometry really is a thing. At present there are two optical interferometers operating in the USA and one under construction: Georgia State University's Center for High Angular Resolution Astronomy (CHARA), the Navy Precision Optical Interferometer (NPOI), and New Mexico Tech's Magdalena Ridge Optical Interferometer (MROI, the one under construction). Europe operates the Very Large Telescope Interferometer (VLTI) in Chile. Australia has the Sydney University Stellar Interferometer (SUSI). Optical interferometers have been around for a really long time. Michelson famously measured the diameter of Betelgeuse in December 1920. The first image from an optical interferometer was of Capella, produced by the University of Cambridge's COAST telescope in September 1995.

The key difference between VLBI and optical interferometry is that we must combine the light from each telescope in real time, rather than recording the RF data to disk and forming the interference patterns later using correlation. Our interference patterns are recorded on high speed cameras, extracted, calibrated, and then stored as OIFITS files. These files are then later reconstructed using a variety of methods, including Markov chain processes and regularized maximum entropy.

Except for the CLEAN deconvolution process, the methods used to reconstruct images from the EHT data are identical to what optical interferometry has been doing for decades (see https://iopscience.iop.org/article/10.3847/2041-8213/ab0e85, Section 2.2.2 for references to literature). The maximum entropy process used for optical interferometric image reconstruction was, in turn, developed for MRI image reconstruction.
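For anyone curious what "regularized maximum entropy" means in practice, here is a deliberately tiny sketch of the idea (toy image, random uv-coverage, all numbers made up; real pipelines fit calibrated OIFITS quantities, not raw FFT samples): minimize a data-misfit term plus an entropy penalty, under a nonnegativity constraint.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 32
    truth = np.zeros((N, N))
    truth[12:20, 12:20] = 1.0                  # a simple box "source"

    mask = rng.random((N, N)) < 0.15           # sparse, random uv sampling
    data = np.fft.fft2(truth) * mask           # "observed" visibilities

    img = np.full((N, N), truth.mean())        # flat starting image
    lam, step = 1e-3, 0.5
    for _ in range(500):
        resid = (np.fft.fft2(img) - data) * mask
        grad_chi2 = np.fft.ifft2(resid).real   # data-misfit gradient (rescaled)
        grad_ent = np.log(np.clip(img, 1e-8, None)) + 1   # grad of -entropy
        img = np.clip(img - step * (grad_chi2 + lam * grad_ent), 0, None)

    # mean abs error vs. truth; should land well below the flat start's ~0.12
    print(np.abs(img - truth).mean())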

Don't get me wrong, I am not attempting to trivialize the result of the EHT team. The effort involved is monumental and the result is astonishing. In fact, I suspect my facial expression was very similar to Katie Bouman's now famous photo when I first saw the image. Then my jaw hit the floor when I found that some of my work (Baron, Monnier, Kloppenborg 2010) was cited in their imaging paper! However, my first inspection of the "eht-imaging" and "SMILI" repositories has yet to reveal anything new or novel that is not regularly employed by optical interferometrists.


Shannon says you have to record at twice the frequency of the signal to see the signal. What makes you guys double that again?

Because you'd also like to know the phase of the waves. If you sample at exactly twice the frequency and get unlucky, every sample can land on a zero crossing of the sine wave, and then you have no idea what the phase (or amplitude) is. The peak could be before the zero crossing and the minimum after it, or the opposite (a pi offset in phase). Granted, landing exactly on the zero crossings is unlikely. Ideally, you want 4 samples per wave - the zero crossings and the max/min - because samples a quarter-cycle apart pin down both the amplitude and the phase for certain (again, luck is involved in where your samples land).
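A quick numpy illustration of the quadrature point (amplitude and phase are made-up test values): sampling a 230 GHz tone at 4 samples per cycle gives you pairs a quarter-cycle apart, from which amplitude and phase drop out directly.

    import numpy as np

    f = 230e9                        # observing frequency, 230 GHz
    A_true, phi_true = 1.7, 0.6      # made-up amplitude and phase
    fs = 4 * f                       # 4 samples per cycle
    n = np.arange(4)
    x = A_true * np.cos(2 * np.pi * f * n / fs + phi_true)

    # Consecutive samples are a quarter-cycle apart:
    #   x[0] = A*cos(phi),  x[1] = -A*sin(phi)
    print(np.hypot(x[0], x[1]))      # -> 1.7
    print(np.arctan2(-x[1], x[0]))   # -> 0.6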

Can you not make a coherent quadrature detector? I was under the impression that those were pretty common in optical communication systems. Edit: never mind, 230 GHz, so millimeter wave; but mixers and oscillators exist at that frequency, are the noise figures just too high to use?

Luck isn't involved as long as you can ensure your sampling noise is uncorrelated with the incoming wave, either by direct insertion of dither or characterization of environmental noise. Regardless, your point still stands, just a fun tidbit.

Thanks!

> the magic of radio interferometry is that we can record the signal to disk while preserving phase. Optical photons cannot be recorded and played back with phase preserved

Why is that the case for optical photons, if the reason is “deeper” than just a higher frequency, as you answered elsewhere?


I can’t speak to any physical limitations that this poster seems to be speaking to (I only studied physics theory, so I’m not too sharp on the details of lab equipment), but I would guess intuitively that a fundamental limitation is that radio tends to be more coherent: you aren’t relying on individual photons but rather aggregating a bunch of photons to measure a wave. In contrast, optical light tends to be incoherent (unless it's from a laser), so you have to measure individual photons, and they aren’t really correlated with each other (so interferometry doesn’t work).

Can you still do VLBI if you downconvert the signal, or do you do direct sampling (or something else lol)?

Also, you are looking at a piece of the sky the size of 5 atoms at arm's length.

Thanks for this analogy, it's far more enlightening than the human hair version.

Interesting. I have precisely zero intuition for the scale of five atoms!

How many football fields is that?

The size of a football 27,000,000 football fields away.

Let’s just round to 0.

Smaller than kickoff dot.

American or European?

Or the thickness of a human hair at 500km (from the article)

Yes, but I did not use that one because I don't like that visualization. My reptilian brain made it sound way bigger than it actually is. As the horizon is only 5-10 km away, I cannot really estimate a distance of 500 km. I can see the problem of measuring 5 atoms' height at arm's length, though, even if I cannot imagine the size of an atom.

Black holes are small things. They're just very heavy.


Messier 87* has a Schwarzschild radius of ~17.784 light hours. The scale of the universe is enough that the mother of my ex had a panic attack when watching a YouTube video about things much smaller than that — hypergiant stars.
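A quick check of that figure, assuming the published mass estimate of ~6.5 billion solar masses for M87*:

    G = 6.674e-11        # m^3 kg^-1 s^-2
    c = 2.998e8          # m/s
    M = 6.5e9 * 1.989e30            # ~6.5 billion solar masses, in kg
    r_s = 2 * G * M / c**2          # Schwarzschild radius, in meters
    print(r_s / (c * 3600))         # -> ~17.8 light hours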

Funny enough: two of my lay friends had panic attacks when I described to them the distance to the nearest black hole.

What was the cause of the panic attack?

From what I understood, the realisation that the Earth, that everything and everyone she’d ever known, was an invisible speck next to an invisible speck compared to the largest star.

Ahh yes, the Total Perspective Vortex.

Some people don't react well to having their ego obliterated.

Probably the innate fear of infinity

The ISS is 'about' 500 km away; that makes it fairly easy to visualise for me.

That's a good point. Perhaps we need to just use things that are well understood.

The moon is about 350,000 to 400,000 km away. By my estimate (of other people's estimate), we're looking at a spot in the sky roughly the size of a dime (US 10-cent coin) on the moon.

I think my reptilian brain understands the distance of the moon and the size of a dime.
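For anyone who wants to check such analogies rather than trust their reptilian brain, the small-angle arithmetic is short. Assumed numbers: a ~17.9 mm dime, the ~384,400 km lunar distance, ~1 angstrom atoms, a ~0.7 m arm, and the published ~42 microarcsecond shadow.

    import math

    RAD_TO_UAS = math.degrees(1) * 3600 * 1e6      # radians -> microarcseconds

    def angular_size_uas(size_m, distance_m):
        return (size_m / distance_m) * RAD_TO_UAS

    print(angular_size_uas(17.9e-3, 384_400e3))    # dime on the Moon: ~10 uas
    print(angular_size_uas(5 * 1e-10, 0.7))        # 5 atoms at arm's length: ~150 uas

Both land within an order of magnitude of the ~42 uas shadow, which is about as much as one can ask of such analogies.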


Hmm. My inner eye can’t really get a handle on the distance to the moon. I know it’s about 30 Earth-diameters away, that it’s 400 times further than my longest cycle ride, but my mind somehow keeps shortening every distance longer than I can go in a day under my own power to similar levels of “quite far”.

You can get a sense of scale from mountain landscapes or city skylines. As you get closer to a city, you get a better sense of scale and size based on how much the city has grown or shrunk.

The moon is very far away, but it DOES grow and shrink as it moves closer to and further away from us. Using the size differential, you can get an innate feel for the size of the moon. Technically speaking, just driving towards (or away from) the Moon will have you traveling closer to / further away from it, and give you a sense of scale.

https://en.wikipedia.org/wiki/Lunar_distance_(astronomy)#/me...

The next time you drive to a major city or a large, recognizable landscape, keep an eye on how big and small mountains (or buildings) are and how quickly they move against the foreground (parallax). It really does give an instinctive sense of scale. Train this instinct well enough, and you can use it on the moon.


The trouble with driving (or trains) is that my unthinking processing treats the speed as constant and shrinks the distances. Even cycling 1080 km along the Rhine, from the North Sea to the Swiss not-quite-Alps had that foreshortening, though to a lower degree.

Black holes are normally small things. The more distant of the EHT's two targets, however, is larger than our solar system.

Or an orange on the moon (from a youtube video IIRC?)

The specific mention of the box of hard drives in the press release was because there are no flights in or out of the South Pole (where the 10m South Pole Telescope is) for ~9 months in winter, and the Internet access there is very expensive and not very high bandwidth.

This is why you aren't likely to be doing VLBI in the visible light spectrum any time soon. Your wavelength is three orders of magnitude smaller, so the equipment you use to capture phase information has to be three orders of magnitude faster, and the correlation is likely to require at least three orders of magnitude more capacity.
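The ratio, taking ~500 nm as representative of visible light:

    print(1.3e-3 / 500e-9)   # ~2600, i.e. roughly three orders of magnitude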

Yes, we're not going to do optical VLBI soon, but the reason is deeper... I just wrote out a long comment [1].

[1] https://news.ycombinator.com/item?id=19674563


Is there any information on the tech behind this? As a software developer, I find this horrifying and fascinating.

... and the tectonic plates move, and that fucking delivery truck for the people down the road keeps going past...

Any similarities with an electron microscope?

Bouman's 2014 PhD thesis is refreshingly readable:

https://people.csail.mit.edu/klbouman/pw/papers_and_presenta...

Anyone can understand at least the introduction. The author treats two related problems. The first is the (today well-known) integration of data from several separate antennae. The second is the problem of recovering an image of an object from the light reflected off a white wall. They are formally quite close, and the underlying math is, at many points, the same.


Previous HN discussion about seeing round corners: https://news.ycombinator.com/item?id=17902177

[flagged]


Did the GP comment get edited? I don't see how your comment is relevant.

For me it still says: "The author [did] integration of data from several separate antennae."

You can't just elide the verb there... The comment says the author "treats two related problems" in the paper, one of them being the integration of data. Is that not true? The paper certainly covers that topic.

> Once we set up an array of space telescopes throughout cislunar space (the volume inside the Moon’s orbit) we’ll get pictures of the SMBs in the cores of every nearby galaxy and that’s when the science really gets started.

This gave me goosebumps. That's the kind of stuff that justifies a permanently inhabited moon-base.


How? It justifies telescopes on the Moon, not people on the Moon.

It justifies telescopes in really high orbit, but not on the Moon: unless the telescope is manufactured on the Moon, the logistics of flying a large telescope there and landing it safely are so complex that it's far easier to leave it in Earth orbit and get the same results from there.

Stationkeeping within a fraction of a millimeter is much easier when your instrument is on solid ground. It might simplify the platform to the point where descent into the lunar gravity well is worthwhile.

Building on the surface destroys the entire benefit of the project. The moon isn't large enough to give you the effective aperture size you need - if it were, you'd be fine on Earth.

The point would be to combine this with the earth-based telescopes to create a 400,000km aperture.

If you took the straw from a Big Gulp (1/4 inch in diameter) and made the straw 20,000 miles long, then looked at the sky through that straw, the patch of sky you'd see would be the size of the shadow of the M87 black hole.

Which is approximately 6.5 mm / 32,000 km.
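Those numbers check out against the published ~42 microarcsecond shadow:

    import math

    angle_rad = 6.5e-3 / 32_000e3                 # 6.5 mm across 32,000 km
    print(math.degrees(angle_rad) * 3600 * 1e6)   # -> ~42 microarcseconds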

The real story is that my childhood was filled with lies, and every "photo" of a black hole I saw in grade-school science books was really an artist's rendering.

That story is still true today: we only have artist renderings of the black holes and do NOT have real pictures.

The data manipulations that this EHT team did to process their raw data are NOT acceptable from the perspective of a correct scientific experiment.

They got their images only when they allowed themselves to creatively interpret data from their telescopes.

~~~~~~~

https://youtu.be/UGL_OL3OrCE?t=1177

19:37

What you can do is to use methods where you [have] do not need any calibration whatsoever and you can still can get pretty good results.

So here on the bottom at the top is the truth image, and this is simulated data, as we are increasing the amount of amplitude error and you can see here ... it's hard to see ... but it breaks down once you add too much gain here. But if we use just closure quantities - we are invariant to that. So that really, actually, been a really huge step for the project, because we had such bad gains.

~~~~~~~

They also deleted multiple critical comments from that video presentation.

E.g. "Pratik Maitra" posted multiple comments that later disappeared.


Do you think the fact that the CT scanner at your local hospital needs to be calibrated and computationally reconstructed from X-ray intensities mean it does not result in an "image"?

When we use side-scan sonar to create representations of the ocean floor (e.g. https://commons.wikimedia.org/wiki/File:Laevavrakk_"Aid".png), they are computationally reconstructed from the raw data which are not intrinsically recognized as pixels without reconstruction. Are these not "images"?

What is your actual contention here? Is it that any representation which is not the result of a traditional visible-light camera doesn't count as an "image"?

If so it's an irrelevant distinction to make. If not, you need to articulate in a specific and informed way why the way they reconstructed the image was wrong or could be improved.

It seems from your blog that you don't really understand what a "prior" is and why it might be useful for this kind of signal processing.


> CT scanner at your local hospital needs to be calibrated

Of course the scanner (and any other measurement tool) needs to be calibrated. Specifically, the scanner (and the telescope) needs to be pre-calibrated based on already known samples.

In the case of a telescope, it needs to be pre-calibrated based on known images of remote stars.

Katie Bouman (the face of the EHT imaging team), however, claims: "you [have] do not need any calibration whatsoever and you can still can get pretty good results"

Check it out, she actually said that: https://youtu.be/UGL_OL3OrCE?t=1180

I am surprised that only a few people caught that flaw.


She literally addresses this in the video you linked: https://youtu.be/UGL_OL3OrCE?t=1757

They spent a lot of effort ensuring that their imaging methods were objective and free of human bias.
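For the unconvinced, the "closure quantities" behind "calibration free" are not hand-waving. A five-line numpy demo (made-up visibilities and gains) shows that the closure phase around a triangle of stations is mathematically immune to per-station phase errors:

    import numpy as np

    rng = np.random.default_rng(2)
    v12, v23, v31 = np.exp(1j * rng.uniform(0, 2*np.pi, 3))   # "true" visibilities

    g = np.exp(1j * rng.uniform(0, 2*np.pi, 3))    # arbitrary station phase errors
    m12 = g[0] * np.conj(g[1]) * v12               # corrupted measurements
    m23 = g[1] * np.conj(g[2]) * v23
    m31 = g[2] * np.conj(g[0]) * v31

    closure_true = np.angle(v12 * v23 * v31)
    closure_meas = np.angle(m12 * m23 * m31)       # gains cancel in the product
    print(np.isclose(closure_true, closure_meas))  # -> True

That cancellation is exact, which is why closure phases can be trusted even when the station gains are badly known.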


The EHT team tested for some biases, but did not test for the most significant one.

Because they are trying to make an image of a black hole, their strongest bias is to see a black hole in anything.

So they should have tested whether their final implementation of the "imaging method" does NOT see a black hole when the incoming sparse data does not contain one.

Unfortunately, there is no such test in the presentation.

The EHT team tested that an "imaging method" trained for recognizing a disk (without a hole) is still able to recognize a black hole. See it at [31:55]

https://youtu.be/UGL_OL3OrCE?t=1916

But they did not test the reverse: train an imaging method for recognizing a black hole, then feed sparse disk data to it. Would it be able to see the disk, or would it still see a black hole?

How about feeding it sparse data of 2 bright stars? Would this imaging method that was trained to recognize black holes still be able to see these 2 stars?

Unfortunately, there was no testing like that ... or worse -- they did such testing, but then discarded the results because they would not impress the public and financial sponsors.


You just sound like a loon if you take your stance against them in every thread on a random tech forum.

"Every thread" is an obvious exaggeration.

But anyway, what alternative do you suggest if I disagree with the evaluation of that "discovery"?

Open public discussion is the way to go, isn't it?


Write a paper or blog post that convincingly makes your case and shows that you deeply understand their approach, so that you are qualified to criticize its flaws.

If you actually believe their result is fake, then it's not like the people you need to convince are hacker news readers; you need to convince other physicists who are in a position to agree with you and do something about it.

Anyway if you go around pointing out things like "comments were deleted! they must be covering something up" you are just (rightly) written off as a conspiracy theorist.


> or blog post

Already done: https://dennisgorelik.dreamwidth.org/170455.html

> people you need to convince

Convincing other people is a nice side effect. The main goal is to find flaws in my own reasoning.

> convince other physicists

Why physicists, specifically?

The problem is in overly creative image interpretation. That is an "information processing" domain, which is quite suitable for Hacker News discussion.

I do NOT dispute physics equations that EHT team used.

> "comments were deleted!"

Suppressing critical arguments is one of the important warning signs of a scam operation. For example, Theranos suppressed critical feedback too.

Suppressing critical feedback is also deeply anti-scientific.

Why should I ignore/suppress that argument?

You also pretend as if "they deleted comments" is the only argument I have.

There are plenty of red flags in what the EHT team did, and I have listed some of them.


Writing a blog post that you do not succeed in publicizing is the same as not writing a blog post.

Convincing people is not a side effect, it is the goal of a post, or of your strong stance.

The people you need to convince are people who know this subject well. You are in the wrong place. Physicists or data analysis people, whatever, people here are not deeply informed on this, and their opinions, whichever way they go on this, would be pretty irrelevant to the truth of the matter.

Comments were almost certainly deleted for entirely different reasons than suppression of the truth. Scientists, in reality, welcome well-reasoned criticism. Bizarre and ill-argued salvos, however, may very well be ignored or deleted.


> Convincing people is not a side effect, it is the goal of a post, or of your strong stance.

Convincing people is a side effect to me, because I am not getting paid for that.

My ability to reason correctly, on the other hand, helps me make the right technical and business decisions.

> people here are not deeply informed on this

You are implying that there are some scientific gods and then there are poor "us".

The reality is that there are only "us" and these scientific [and pseudo-scientific] gods are just part of us.

> Scientists, in reality, welcome well-reasoned criticism.

Exactly. That is one of the reasons why I think that what this EHT imaging team is doing is anti-scientific.

> Bizarre and ill-argued salvos

I asked:

~~~~~~~~~

19:39 "Calibration Free Imaging" Does it mean that you were using measurements tools (telescopes) without prior calibration?

~~~~~~~~~

What is bizarre about that question?


A very good explanation of what we are looking at: https://www.youtube.com/watch?v=zUyH3XhpLTo

So it's kind of touched on in the lensing part, but gravity acts on light.

So the black hole is slowing down the light travelling towards us.... But the speed of light is a constant so... 'time' is slowed down?

So does the mean that the light is 'older' than the 54 million years it took to reach us?

I'm over 50% sure there's a flaw in my reasoning somewhere here...


The light is redshifted when it "climbs out" of a strong gravity field but still moves through space at a constant speed. https://en.m.wikipedia.org/wiki/Gravitational_redshift
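If you want numbers, here's a sketch of the standard formula for a static emitter near a non-rotating black hole, 1 + z = 1 / sqrt(1 - r_s/r):

    import math

    def redshift(r_over_rs):
        # gravitational redshift for light emitted at radius r and
        # observed far away, Schwarzschild geometry
        return 1 / math.sqrt(1 - 1 / r_over_rs) - 1

    for r in (1.5, 3, 10, 100):     # emitter radius in units of r_s
        print(r, redshift(r))       # z grows without bound as r -> r_s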

Ok that makes sense.

2nd stupid question: I thought gravity acting on light was a 'light as particle' thing, rather than a 'light as wave' thing? If that were the case, gravity acting on light-as-particle manifesting in light-as-wave doesn't seem consistent.


I think most of the time it's best to think in terms of waves, but every now and then they exhibit particle-like behaviour. Any such attempt at simplification hits problems though.

I think the best approach is to be pragmatic and simply follow the observations and what the experiments tell us, rather than force it into one paradigm or the other and complain when it doesn't make sense. This is sometimes referred to as the "shut up and do the math" approach.


A photon does not switch from particle behavior to wave behavior. Physicists do. These are two different models for the same thing.

In some ballistics problems you'll assume a flat Earth (baseball problems); in others, a spherical one (spaceflight). It is about which model makes the math simpler.


I'd say physicists are pretty constant in their coffee seeking behaviour (the dominant operation mode).

A photon being a particle or wave is mainly about pedagogical convenience, it is neither but has properties of both.

Depending on your metaphilosophical models, you might also say that both waves and particles have light-like properties.

The equivalent of 'slowing down' for light is stretching out (increasing) the wavelength. It reduces its energy in the same way that gravitational deceleration reduces an object's kinetic energy, but without affecting its 'speed'.

The model of gravity as distorting 4D spacetime helps this make sense; just as atomic clocks run slower in Earth's gravity well, an atomic clock placed near a black hole would also run much slower. Similarly light leaving the black hole has to move through curved spacetime to get out of the gravity well.

This also circumvents all the wave/particle questions; it doesn't matter which conceptualisation you choose for the light, it's the space the light is moving through that is distorted.

http://blogs.discovermagazine.com/d-brief/2018/11/28/atomic-...


I think the answer is yes, and it’s called the Shapiro time delay effect, but I don’t trust my comprehension of anything about GR: https://en.m.wikipedia.org/wiki/Shapiro_time_delay

Also, not to confuse you more, but for light, time actually stands still since it is moving at the speed of light.

Here is Katie Bouman, one of the EHT team members, explaining it in excruciating detail:

https://www.youtube.com/watch?v=UGL_OL3OrCE

And an interesting side note:

https://www.vox.com/science-and-health/2019/4/16/18311194/bl...


So if it's a matter of increasing the effective aperture, does that mean we can launch some spaceships and do the same trick with an arbitrarily large interferometer?

Yes, in theory. The challenges are likely to be dish size and getting accurate enough positioning. Not to mention the downlink capacity required (and also onboard storage).

What I know of space VLBI is that it exists, and in the one first-hand account I've heard, it didn't give extra detail.

Do you think star tracking doesn't give good enough pointing solutions?


It's not just pointing (2D) but also getting the baseline between telescopes. This is also being investigated for the LISA system https://en.m.wikipedia.org/wiki/Laser_Interferometer_Space_A...

Well, each antenna needs to be sensitive enough to pick up the signals at all.

You also need very high data rates for producing interference fringes with 1.3 mm waves, which is probably one of the reasons why a satellite like RadioAstron works at longer wavelengths.
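Rough arithmetic for one station, with hypothetical but EHT-ish parameters (a couple of GHz of bandwidth, 2-bit samples, two polarizations):

    bandwidth_hz = 2e9
    sample_rate = 2 * bandwidth_hz          # Nyquist rate
    bits_per_sample = 2
    polarizations = 2
    rate = sample_rate * bits_per_sample * polarizations
    print(rate / 1e9, "Gbit/s")             # -> 16 Gbit/s per band, per station

Hours of that per station is why the data travels as crates of disks.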


There is a gravitational-wave project that wants to detect gravitational waves using satellites. Probably a similar idea. I think the name is “LISA”.

He makes an interesting statement: "Science advances when we’re wrong or surprised or both."

Theoretical advances require failures of existing worldviews.

They either don't make accurate predictions or explanations, or they fail to account for phenomena.

Science is the internalisation of expecting, admitting, and embracing error.


If you'd like to skip the very lengthy introduction, search for "The reason this hasn’t be done before".

I’ll admit: I wasn’t a person who was wowed by the picture of a black hole. For all intents and purposes, it isn’t a very good picture.

Reading this write-up gave me a much better appreciation for the difficulty of actually just capturing that image. That difficulty is, I'm sure, what people wanted me to focus on when seeing the photo, but that sort of context requires a write-up like this; it can’t be relayed through a small, blurry picture.


Does anyone have a link to the original, full-size image of the black hole? I can only find 800x600 versions lying around. I want to know how large the original is. =)

[flagged]

Honestly curious: why did Bouman get the credit she did? According to GitHub, she didn't commit nearly as much code: 2.5k LOC vs 850k lines. I understand she might have had a greater role in management etc.; my point is more that it was a team effort (which she reiterated). Is this a case of the media trying to "celebrate diversity" by unfairly cutting out the whole team which accomplished something incredible? Or is she just the public face and responsible for their media presence?

Please don't drag us down into previous hells.

> Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents.

https://news.ycombinator.com/newsguidelines.html


She is credited because she did the work.

It's telling that when it's a woman getting the credit you're going on about teams, but when it's a man getting credit you say nothing.

https://mobile.twitter.com/thisgreyspirit/status/11165185449...


Because that's how the world works: project leaders get fame and line workers don't.

Somehow people are OK with it when Elon Musk gets credit for Tesla and SpaceX, or Steve Jobs for the iPhone, but suddenly when it's a woman, people will dig through the GitHub accounts.


I don't think the Elon Musk or Steve Jobs analogy is accurate. Katie Bouman was co-leader of one of four imaging teams on the project. It seems to me the other person who co-led that team deserves equal praise and fame, and perhaps also the leaders of the other 3 teams.

Having said that, the look of excitement and joy on Dr Bouman's face in that photo is so lovely and relatable that the pic was always destined to go viral. So in that sense you could say she was bound to become the human face of the project.


Really, nobody thinks Steve Jobs built the iPhone all alone in a workshop.

[flagged]


If this symbol of female empowerment makes girls pursue careers in science then I’m cool with a bit of women glorification.

> Is this a case of the media trying to "celebrate diversity"

There are several similar releases every year. Finding the Higgs boson. Landing the Rosetta mission's Philae lander on a comet. Launching a new kind of rocket.

Were you questioning people doing these releases the same way? Why not? Is it maybe because they were of the expected gender?

Before accusing people of biases it's best to examine yourself.


My guess is because the people who found the Higgs boson, landed the lander on the comet, etc. are for all intents and purposes not well known: there wasn't a lot of fanfare for them so much as the discovery. As such, people are oddly upset about her getting praise for this.

I look at it two ways: one, what she did was not insignificant. Whether she received more praise than someone else who discovered something and happened to be a guy is not a concern that is going to cause me to lose sleep at night, nor is it something she should be punished for. Two, if her story bubbling up inspires girls (or anyone, I suppose) to get involved in, or at least more interested in, the sciences, hell, I'd say we're all better for it.


I mean, even if you believe that all gender (and minority in general) issues have been solved in the sciences, as of [pick arbitrary date between 1980 and 2019], and there is no longer any dismissal, harassment, or discouragement of women pursuing a career in the sciences, I think we as a society can handle another forty, fifty years of rubbing every "see? Women can too do science as well as the menfolk" achievement in the metaphorical face of the centuries of dismissing women as by-and-large sub-intelligent creatures not temperamentally suited to any serious intellectual work.

Even if it is completely unnecessary nowadays, because women are no longer dismissed, individually or collectively, as not contributing to science, because women are no longer harassed, sexually or misogynistically, when they try to pursue a career in science, I still think it's fair to let people glory for awhile longer in achieving what people once thought they wouldn't, even if most of the disbelievers are centuries dead and hardly any of them are still alive and posting on the internet.


I recently had a percussionist in a band I was in assert that "women don't have the brains to be mechanical engineers", and he honestly seemed to be serious. These people still exist.

edit: I realise the post I'm responding to is sarcastic, but not everyone who thinks women are intellectually inferior to men is a troll


This might be the best comment I’ve ever seen on HN

I don't disagree with anything you said here.

Well, the thing about a black hole - its main distinguishing feature - is it's black. And the thing about space, the colour of space, your basic space colour, is black. So how are you supposed to see them?

It seems like the main issue here was that it was very far away and comparatively small next to other features like, say, galaxies themselves.

Forgive my ignorance, but black holes are quite common in the Universe, right? Why couldn't we simply take a photo of one that's closer to us?

One reason is that the closer holes were so much less massive that they were no easier to snap.

Just compare it, Messier 87, to ours, Sagittarius A*: https://en.wikipedia.org/wiki/List_of_most_massive_black_hol...


They’re actually not that common, and they're hard to spot since space is mostly empty. This was pretty much the easiest one to detect.

"The new black hole picture isn’t really a discovery, but it is a stunning accomplishment."

That "picture" is a stunning accomplishment in deception -- following the steps of Elizabeth Holmes and Bernie Madoff.

This EHT team took white noise from their telescopes, then creatively converted that white noise into one of the theoretical pictures of a black hole.

==============

https://youtu.be/UGL_OL3OrCE?t=2242

37:22

And you can notice like at the bottom we get really terrible reconstruction, just cause if it fits the data very well, because you know it maybe wants to smooth out the flux as much as possible and we don't select things like that in the true data.

==============

They simply delete image interpretations that do not fit the theoretical image they want to see. How convenient. They call it "Calibration Free Imaging":

https://youtu.be/UGL_OL3OrCE?t=1179



