The pic is cool.
The image is fantastic, but one of the things that makes the story cooler, I think, is that even in 1979 the IBM 7040 was something of a relic. Although it was only introduced in 1963, it was superseded as early as 1964:
Just goes to show that if old computer hardware is all you have access to, you can still do useful and amazing things.
You plug in the viewing angle, the black hole mass, and some other constants defined in the paper (they might need some fiddling to look nice).
Then I think the plot is just some Monte Carlo sampling, and then inverted.
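For intuition, here's a toy sketch of that dot-density idea in Python: brightness is encoded as the local density of plotted points, and the result can be inverted like a photographic negative. The geometry and brightness weighting below are crude placeholders of my own, not Luminet's actual equations (he solved the photon geodesics exactly):

    import math, random

    # Toy dot-density ("Monte Carlo") rendering: more dots = brighter.
    # Renders a thin annulus (stand-in for an accretion disk) seen at a
    # low inclination. No relativity here -- placeholder geometry only.
    WIDTH, HEIGHT = 72, 24
    INCLINATION = math.radians(10)    # viewing angle above the disk plane
    R_IN, R_OUT = 0.25, 0.95          # disk inner/outer radius (arbitrary)

    grid = [[0] * WIDTH for _ in range(HEIGHT)]
    random.seed(1)

    for _ in range(200_000):
        # Sample uniformly on the annulus by rejection from the square.
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        r = math.hypot(x, y)
        if not (R_IN <= r <= R_OUT):
            continue
        # Assumed brightness ~ 1/r: keep samples with that probability,
        # so the dot density rises toward the inner edge.
        if random.random() > R_IN / r:
            continue
        # Project onto the image plane: foreshorten the depth axis.
        col = int((x + 1) / 2 * (WIDTH - 1))
        row = int((y * math.sin(INCLINATION) + 1) / 2 * (HEIGHT - 1))
        grid[row][col] += 1

    shades = " .:*#"                  # reverse this string to "invert"
    for line in grid:
        print("".join(shades[min(c // 60, 4)] for c in line))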
Oath is really a cancer when it comes to GDPR. It is definitely non-compliant and makes controlling your data as bad an experience as possible.
I hope they will get the harshest fines for this.
EDIT: works with outline.com
I always thought that photos of other galaxies resolved into specks the size of stars in the night sky. This composite of Andromeda and the moon blew my mind: https://apod.nasa.gov/apod/ap061228.html
I think this is actually a better picture as it puts them both in the sky for a more natural comparison (to me): https://i.kinja-img.com/gawker-media/image/upload/s--Mxc-LeD...
Andromeda side to side is ~6 times the width of the moon, IIRC.
Just imagine being able to see that galaxy clearly! In fact (he says bitterly), just imagine being able to see the damn stars any more. If you've never had the chance to see them on a moonless night, without artificial light wrecking everything, and to watch infinity slowly unfold like a flower as your eyes adjust and take in more, the ethereal edge of the Milky Way appearing in greater and greater detail, then you have missed one of the most beautiful sights that exists. Perhaps the most beautiful. It really is that special.
"Wow, it's that large even though it's that far away!?"
Article A: Here’s a photo of another galaxy so far away, showing XYZ.
Article B: We suspect there’s another planet in our own solar system but we can’t find it and supposedly it’s too far away for us to photograph it.
Getting a brighter picture of something that's dim just requires collecting more light - essentially taking a longer exposure. But getting a sharper picture of something that's very small in the sky is limited by both the aperture of the telescope (the diffraction limit) and the atmospheric distortion, if you're viewing from the ground.
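To put numbers on the diffraction limit: the Rayleigh criterion gives an angular resolution of about 1.22 λ/D. A quick sanity check in Python, with illustrative values of my own (not from the article):

    import math

    wavelength = 550e-9   # green light, metres (assumed)
    aperture = 2.4        # Hubble-class mirror diameter, metres (assumed)

    theta = 1.22 * wavelength / aperture               # radians
    arcsec = math.degrees(theta) * 3600
    print(f"diffraction limit ~ {arcsec:.3f} arcsec")  # ~0.058 arcsec

No amount of extra exposure time improves that figure; a longer exposure buys brightness, not sharpness.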
In the 1910s, the observables were extremely faint visible-light nebulae, with M13 (Hercules) closely comparable to M33 (Triangulum), although the spiral structure and larger solid angle of the latter had been known since 1850: http://www.messier.seds.org/more/m033_rosse.html (That was only about five years after the first known resolution of nebulae into elliptical, spiral, and irregular; by the time the linked sketch was drawn, there were enough observations of spiral nebulae to decide that M33 must be one as well.)
Observations were consistent with an extragalactic but nearby star cluster.
Better observations led to an evolution of what, retrospectively, M33 "was": from closely comparable with M13 to more than a hundred times further away (~6.8 vs ~850 kpc), and containing tens of billions rather than hundreds of thousands of stars.
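A quick check of what that revision implies, using the figures above plus the inverse-square law:

    d_old, d_new = 6.8, 850.0                # kpc, from the comment above
    ratio = d_new / d_old
    print(f"{ratio:.0f}x further away")      # ~125x
    # Same apparent brightness at 125x the distance requires
    # roughly 125^2 times the intrinsic luminosity:
    print(f"{ratio**2:.0f}x more luminous")  # ~15,600x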
Perhaps it's best to think of "was" and "is" in relation to human observations. (This also comes up whenever someone objects to talking in the present tense about today's observations of objects at kiloparsec distances, as in "Sag A* is noisy today" provoking "no, it was noisy 26 000 years ago!" -- and of course hypotheses about what exactly generates the Sag A* observables have not fossilized yet.)
There are still large uncertainties about the exact structure of our own galaxy, but the (cosmological) Copernican principle is alive and well in that area of galactic astronomy since galaxy zoos are full of various subtypes of dusty spirals.
I understand that they were using the best models available at that time, and with better equipment it was possible to get a better model, so the image became outdated.
I agree that there ought to be clearer indications of which images come from sensor data and which are purely simulated, but all sensor images are heavily processed. I worked on Mars Global Surveyor, Mars Odyssey, MRO, LRO, and other NASA missions, and wrote image processing software for various instruments on those spacecraft (primarily infrared cameras).
They're not simply snapshots like you'll get from your DSLR. Even the most DSLR-like cameras tend to image in many spectral bands (e.g. 11 bands ranging from infrared through visual into UV). An image you see in a magazine typically selects three spectral bands and assigns them to red, green, and blue.
Almost none of the images you see are true color, in part because almost none of the instruments have 3-band RGB sensors like you might see in your DSLR. There are many reasons for this, but the two biggest ones are that much of the universe is opaque to visual frequencies (you need infrared or other sensors to penetrate clouds of dust), and much of the scientifically interesting data isn't present in the visual spectrum (infrared is used to determine mineral composition, like finding hematite on Mars, which is evidence of water in the past).
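As a rough illustration of that band-to-RGB step, here's a minimal sketch in Python/NumPy. The 11-band cube is random placeholder data and the band indices are arbitrary assumptions of mine, not any real instrument's mapping:

    import numpy as np

    cube = np.random.rand(11, 512, 512)   # 11 spectral bands, IR..UV

    r_band, g_band, b_band = 8, 5, 2      # hypothetical band selection
    rgb = np.stack([cube[r_band], cube[g_band], cube[b_band]], axis=-1)

    # Stretch each channel to the full display range (a common step).
    lo = rgb.min(axis=(0, 1), keepdims=True)
    hi = rgb.max(axis=(0, 1), keepdims=True)
    rgb = (rgb - lo) / (hi - lo)
    print(rgb.shape)                      # (512, 512, 3), ready to display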
Images of the surface of planets come from a complex stitching-together of long, thin strips of data (again in many non-visual frequencies), which are taken from different angles, at different times of day. In order to create the uniform, visually-appealing (and also scientifically-valuable) images of the surface of Mars that you see, the images are heavily processed to balance albedo and adjust other anomalies. One little-known fact is that gravitationally, none of the large, rocky celestial bodies are spherical; the moon in particular is a very lumpy potato shape. This means that an orbit around the moon is nowhere near a perfect ellipse; it wobbles up and down all the time, which means the distance of the camera from the surface changes all the time. All of this has to be compensated for. During processing, everything is stretched to align, and hundreds/thousands of images of the same region from different times (over many years) are averaged together to get the final product.
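A toy version of that balance-and-average step, with fake strips standing in for real passes (real pipelines also do the geometric correction for the varying altitude, which is skipped here):

    import numpy as np

    # 200 fake passes over the same region, each with a different
    # overall exposure/illumination level.
    strips = [np.random.rand(256, 64) * (0.8 + 0.4 * np.random.rand())
              for _ in range(200)]
    stack = np.stack(strips)

    # Remove each pass's overall brightness offset first (a crude
    # stand-in for albedo balancing)...
    stack /= stack.mean(axis=(1, 2), keepdims=True)

    # ...then average: per-pass noise falls off roughly as 1/sqrt(N).
    final = stack.mean(axis=0)
    print(final.shape, final.std())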
All of this is a long way of saying that literally every astronomical image you've seen is computer-generated and heavily processed. There's value in knowing which images started with sensor data and which are pure simulation, but there's no value in non-computer-generated imagery.
It's hard to explain this quickly in an image caption. We used to have to deal with conspiracy quacks all the time, demanding that we release the "real images" rather than our "manipulated images" so that they could find the aliens on Mars, but the simple truth is that there are no "real images". It's all sensor data in a form that is completely useless without extensive processing.
I get that there can be discussion of whether or not the processed image data is an accurate depiction of reality. Sometimes the raw images are good enough to release without any processing. Sometimes the 'raw' data needs so much processing and generation of new data just to arrive at an image at all (that latest black hole photo being a good example) that it would be disingenuous (at least in my eyes) to call it an 'image acquired using a telescope'. This is indeed a good discussion to have.
My point, however, is that news sites can't just leave out their image captions and let the public guess. That's the worst option. Best would be clear image descriptions, so that the public at least knows what it's looking at, even if those captions don't fully convey the complete acquisition and processing pipeline but just a short synopsis of it.
What does it mean to look at a black hole "from the side"?
Is the direction of spin of a black hole even knowable?
Actually, the first black hole simulation was rendered way back in the '70s with pen and paper.
"Appears to". Doesn't say it does. Doesn't suggest it does. I think you're being unfair.
> when they don't match up with reality, somehow reality's wrong and not the model or simulation.
Can you point to some examples? This is not something I've noticed myself.
Note especially Figures 2, 3, and 4, which are used to justify the idea that a two-dimensional screen is enough to capture all the information.
Obviously there is a lot more going on -- especially the black hole entropy formula, which says that entropy grows with surface area, not volume -- to motivate this idea, but I always thought those figures were particularly educational.
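For reference, the formula being alluded to is the Bekenstein-Hawking entropy (a standard result, not something specific to that paper), which scales with the horizon area A rather than the enclosed volume:

    S_{BH} = \frac{k_B c^3}{4 G \hbar} A = \frac{k_B A}{4 \ell_P^2}

where \ell_P is the Planck length.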
The NASA video is a visualisation that "simulates" the optical distortion.
It would be a great picture to illustrate gravitational lensing, but it's not such a great one for conveying what a black hole is.
From the picture it seems that particles are swirling around in multiple directions, whereas the mental picture you should get is: a standard accretion ring, but with a gravitational-lensing visual filter turned on.
A black Saturn with a Snapchat filter
If you look, you can see both sides of the ring behind the black hole:
You just need to imagine slightly stronger warping.
You would never lose sight of the accretion disk. And you can see both sides simultaneously.
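To get a feel for the scale of warping involved: in the weak-field limit, light passing at impact parameter b is deflected by about 4GM/(c^2 b). A quick Python sketch (the mass is an arbitrary assumption of mine, and the formula stops being valid close to the photon ring, where the bending blows up and you start seeing the disk's far side above and below the hole):

    import math

    G, c = 6.674e-11, 2.998e8
    M = 10 * 1.989e30                   # assumed: 10-solar-mass black hole
    r_s = 2 * G * M / c**2              # Schwarzschild radius

    for k in (100, 10, 5, 3):
        b = k * r_s                     # impact parameter
        alpha = 4 * G * M / (c**2 * b)  # radians, weak-field only
        print(f"b = {k:3d} r_s -> deflection ~ {math.degrees(alpha):.1f} deg")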