Information on how this detection was done (from the original paper)
The raw data were transferred to the ilifu supercomputing cloud system and reduced there. Bandpass, flux, and phase calibration, along with self-calibrated continuum imaging, was performed using the PROCESSMEERKAT pipeline, which is written in PYTHON, uses a purpose-built CASA (McMullin et al. 2007) Singularity container, and employs MPICASA (a parallelized form of CASA). Data were at this stage rebinned by a factor of 4 (i.e. to ‘8K’ mode, 104.49-kHz wide channels), and the 1304–1420-MHz segment of the L band was extracted for the results presented in this paper. Model continuum visibility data were subtracted from the corrected visibility data using the CASA task uvsub. A second-order polynomial fit to the continuum was then calculated and subtracted using the CASA task uvcontsub for all channels to remove residual continuum emission from the spectral line data. Finally, spectral line cubes were created using tclean with robust = 0.5 and no cleaning. The RMS per 104.49 kHz channel was consistent between 1304 and 1420 MHz with the per-channel noise of 0.16 mJy beam⁻¹ at the centre (1362 MHz). All channels were convolved to a common synthesized beam of 14.0 × 9.8 arcsec² at a position angle of −20°.
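The residual continuum subtraction step (a second-order polynomial fit across channels, as uvcontsub does) can be sketched with numpy on a synthetic spectrum. This is only an illustration of the fit-and-subtract idea, not the actual CASA implementation, which operates on visibilities per baseline; the frequencies, line position, and amplitudes below are made up.

```python
import numpy as np

def subtract_continuum(freqs_mhz, spectrum, order=2):
    """Fit a low-order polynomial to a spectrum and subtract it.

    Illustrative stand-in for the per-baseline fit that uvcontsub
    performs on visibility data (not the real CASA code).
    """
    coeffs = np.polyfit(freqs_mhz, spectrum, order)
    continuum = np.polyval(coeffs, freqs_mhz)
    return spectrum - continuum

# Synthetic example: 104.49-kHz channels across the 1304-1420 MHz segment
freqs = np.arange(1304.0, 1420.0, 0.10449)
continuum = 2.0 + 0.01 * (freqs - 1362.0)                   # smooth sloping continuum
line = 0.5 * np.exp(-0.5 * ((freqs - 1380.0) / 0.3) ** 2)   # narrow spectral line
spectrum = continuum + line

residual = subtract_continuum(freqs, spectrum)
# The smooth continuum is removed; the narrow line survives in the residual.
print(round(residual.max(), 2))
```

In the real pipeline the fit would normally exclude channels containing the line so the line itself does not bias the continuum estimate; with a narrow line over a wide band, as here, the bias is small either way.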
What would happen if you trained an AI on images of the sky, had it verify its predictions at those locations with the MeerKAT telescope, and then had it search the rest of the sky? I bet we could build a pretty huge database of info pretty quickly.
I think a better way to read this headline is 'Astronomers exceedingly pleased with performance of instrument' - it's finding the types of things they want to look at, where they want to look. If it was about galaxy count - the Hubble Deep Fields found thousands of galaxies per shot, starting almost 30 years ago.
Fun fact: the Hubble Deep Field took 140.8 hours to image an area 2.4 arcminutes on a side. At that rate it would take Hubble roughly 400 thousand years of continuous observing to image the entire night sky to that depth.
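A quick sanity check of that scaling, under the simplest assumptions (continuous observing, no overheads, no overlap between pointings):

```python
# Back-of-the-envelope: how long to tile the whole sky at HDF depth?
# Assumed inputs: 140.8 h per pointing, a 2.4 x 2.4 arcmin field,
# and the full celestial sphere of ~41,253 square degrees.
field_arcmin2 = 2.4 * 2.4
sky_deg2 = 41_253
sky_arcmin2 = sky_deg2 * 3600

pointings = sky_arcmin2 / field_arcmin2
hours = pointings * 140.8
years = hours / (24 * 365.25)
print(f"{pointings:.3g} pointings, ~{years:,.0f} years")
```

Real scheduling constraints (Earth occultation, pointing overheads, shared telescope time) would only push the number higher.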
Identifying galaxies in images is not the hard part of finding galaxies. I assume they already have software for that, since galaxies are easy to pick out: they're blobs.
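That "blobs" intuition is roughly how simple source finders work: threshold the image and group connected bright pixels. A toy sketch of the idea (the threshold, image, and function name here are invented for illustration; real pipelines use dedicated tools such as SExtractor):

```python
import numpy as np

def find_blobs(image, threshold):
    """Label connected above-threshold pixels (4-connectivity) via a stack-based
    flood fill. A toy stand-in for a real astronomical source extractor."""
    mask = image > threshold
    labels = np.zeros(image.shape, dtype=int)
    count = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue  # already assigned to an earlier blob
        count += 1
        stack = [start]
        labels[start] = count
        while stack:
            y, x = stack.pop()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    stack.append((ny, nx))
    return labels, count

# Two fake "galaxies" well above a noisy background (10-sigma threshold)
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, (64, 64))
img[10:14, 10:14] += 5.0
img[40:45, 50:54] += 5.0
_, n_sources = find_blobs(img, threshold=1.0)
print(n_sources)  # 2
```

The hard parts in practice are deblending overlapping sources and deciding what counts as significant near the noise floor, not the connected-component labeling itself.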
The hard part is taking the images with lots of galaxies. Most telescopes have a narrow field of view, and they are busy, so they can't survey the whole sky. The Rubin Observatory in Chile is going to survey the sky every few nights. The Roman Space Telescope will be a wide-angle Hubble.