I don't know what you mean by "surface", but the number of photons collected depends on the area of the lens. No quantum mechanics or interference involved.
I did mean area. And I thought the number of photons was proportional to the square of the area, because the probability amplitude is proportional to the area. Therefore the probability should be proportional to the square of the area, shouldn't it?
It also seemed to make sense to me: otherwise we would not need to build large telescopes, we could just build lots of small ones and fuse the images.
No, the probability is proportional to the area, not the amplitude. If a sensor of size A sees N photons, a sensor of the same size next to it sees another N. Fusing the two sensors together gives 2N, not 4N. Otherwise you violate energy/momentum conservation.
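A minimal Monte Carlo sketch of this additivity argument (my own illustration, not from the thread): photons land uniformly on a strip, and a sensor catches whatever falls on it. Fusing two adjacent equal-area sensors yields exactly the sum of their counts, i.e. counts scale linearly with area.

```python
import random

random.seed(0)

N = 100_000
# Landing positions of N photons falling uniformly on [0, 1).
xs = [random.random() for _ in range(N)]

n_a = sum(1 for x in xs if 0.0 <= x < 0.1)      # sensor A, area 0.1
n_b = sum(1 for x in xs if 0.1 <= x < 0.2)      # sensor B, same area, next to A
n_fused = sum(1 for x in xs if 0.0 <= x < 0.2)  # the two sensors fused, area 0.2

# Counts add: the fused sensor sees 2N, not 4N.
assert n_fused == n_a + n_b
print(n_a, n_b, n_fused)
```

Doubling the area here doubles the expected count (~10,000 per sensor, ~20,000 fused); a quadratic law would require the fused sensor to see four times one sensor's count, which the photon bookkeeping simply doesn't allow.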
We do build small(er) ones and fuse the images (using interferometry), that's what many large telescopes are these days. In the radio, we've been mostly doing it that way since 1978 (VLA).
... we use interferometers because we want the added resolution, not just the additional photons. If you don't want extra resolution, you can just add up the images from all of the smaller telescopes.
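To sketch what "just add up the images" buys you (illustrative numbers of my own, not from the thread): co-adding K frames grows the signal like K and the background noise like sqrt(K), so the signal-to-noise ratio improves as sqrt(K), with no gain in resolution.

```python
import math
import random

random.seed(1)

# Per-frame signal, per-frame noise sigma, number of co-added frames,
# and number of Monte Carlo trials (all made-up values).
SIGNAL, SIGMA, K, TRIALS = 5.0, 10.0, 100, 2000

def stacked_pixel():
    """Sum one pixel over K frames, each with Gaussian background noise."""
    return sum(SIGNAL + random.gauss(0.0, SIGMA) for _ in range(K))

samples = [stacked_pixel() for _ in range(TRIALS)]
mean = sum(samples) / TRIALS
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / TRIALS)

# SNR of the stack: close to sqrt(K) * SIGNAL / SIGMA = 5,
# versus SIGNAL / SIGMA = 0.5 for a single frame.
print(mean / std)
```

This is why fusing many small telescopes does collect photons just fine; what it cannot do (without interferometry) is beat the diffraction limit of each small aperture.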
Light is a wave and a particle, and if you are getting wildly different answers from thinking about it as a wave and as a particle, and you're looking at a macro and not micro scale, then you're doing it wrong. That's why I answered your wave question with a photon count answer.
> That's why I answered your wave question with a photon count answer.
But how do you count photons without using probability amplitudes? If you count them by using a classical reasoning of photons being small particles falling from the sky, I'd say you're doing it wrong, because photons are not classical particles.
CCDs count photons in a particular fashion, and it happens to involve individual photons doing things. You might think of it as photons knocking electrons off of atoms, but it's actually semiconductors with a narrow bandgap, so CCDs work at much lower energies than needed for ionizing radiation.
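A toy model of that counting chain (illustrative numbers, not a real device spec): each detected photon frees a photoelectron with probability given by the quantum efficiency, and the readout divides the accumulated electrons by the gain to produce digitized counts (ADU).

```python
# Made-up detector parameters for illustration only.
QE = 0.8      # quantum efficiency: fraction of incident photons detected
GAIN = 2.0    # electrons per ADU at readout

def adu_from_photons(n_photons):
    """Convert incident photons to digitized counts for one pixel."""
    electrons = n_photons * QE    # photoelectrons collected in the pixel well
    return electrons / GAIN       # analog-to-digital units reported

print(adu_from_photons(1000))     # -> 400.0 ADU
```

The whole chain is linear in the number of photons, which is again the counts-proportional-to-area picture rather than an amplitude-squared one.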
If the probability amplitude is proportional to the area, it doesn't matter that the amplitude already contains a length² term. QM then tells you that the probability should be proportional to the square of that amplitude, even if that means it contains a length⁴ term.
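One way to reconcile the two scalings (my own note, not from the thread): the coherent sum over the aperture does make the peak amplitude at the focus grow with area, but diffraction shrinks the focal spot by the same factor, so the integrated photon count comes out linear in area after all.

```latex
% Coherent addition over an aperture of area A gives a focal-plane peak
\psi(0) \propto A
\quad\Rightarrow\quad
|\psi(0)|^2 \propto A^2 ,
% but the diffraction (Airy) spot shrinks as the aperture grows:
\Omega_{\text{spot}} \propto \frac{\lambda^2}{A} ,
% so the total detection probability integrates back to a linear law:
N \;\propto\; |\psi(0)|^2 \,\Omega_{\text{spot}} \;\propto\; A^2 \cdot \frac{\lambda^2}{A} \;\propto\; A .
```

So the amplitude-squared intuition is not wrong; it just applies to the peak intensity, not to the total number of photons collected.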
Not sure why I cannot reply to your other comment.
Anyway, astronomy has nothing to do with the scarcity of photons.
If you take a picture of M31 you have a relative abundance of photons, and most of the light does not come from point sources.
If you look at any planet or globular cluster or nebula, it is the same.
Only when you are observing single stars do you have that scarcity. And single-star observations are certainly not the whole of astronomy.
HN requires a certain delay before responding, to encourage people to think first.
Many distant things, like galaxies, are very dim. M31 is almost visible to the naked eye, which is not dim to an astronomer. You might want to study astronomy before having strong opinions about it.
Seriously, are you asking me to "study" astronomy?
I was doing astronomical research (planetoids, variable stars, supernovae) 20-something years ago.
And M31 is not "almost" visible to the naked eye. It is perfectly visible unless you live under a light-polluted sky.
The objects I captured were, I think, just below 17th magnitude.
As I said before, extremely dim objects cannot be seen simply by increasing the aperture, because that helps only for point sources.
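A back-of-envelope photon budget for a 17th-magnitude point source (my own rough numbers: an assumed zero point of ~1000 photons/s/cm²/Å for a magnitude-0 star and an assumed ~1000 Å broadband filter; both are illustrative round values):

```python
import math

ZERO_POINT = 1000.0   # photons / s / cm^2 / Angstrom for mag 0 (rough, assumed)
BANDWIDTH = 1000.0    # filter bandwidth in Angstrom (assumed)

def photon_rate(mag, aperture_cm):
    """Approximate photon rate from a point source through a circular aperture."""
    area = math.pi * (aperture_cm / 2.0) ** 2          # collecting area, cm^2
    return ZERO_POINT * BANDWIDTH * 10 ** (-0.4 * mag) * area

r20 = photon_rate(17.0, 20.0)   # 20 cm telescope
r40 = photon_rate(17.0, 40.0)   # 40 cm telescope

# Counts scale linearly with area: doubling the diameter quadruples the rate.
print(r20, r40 / r20)           # ratio is exactly 4 (the area ratio)
```

For an extended source the same extra photons are spread over proportionally more resolution elements, which is why aperture alone doesn't rescue surface-brightness-limited targets the way it rescues point sources.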
The Hubble Space Telescope captured some of the dimmest objects in the universe.
With a "very small" mirror compared to some much bigger ones on this planet.
I'm aware HST has a relatively small diameter (2.4m), but how much of its resolving power (for lack of a better phrase) can be attributed to it being above the atmosphere? I've also heard that modern adaptive optics systems do a good job of mitigating atmospheric effects (turbulence?) for ground based telescopes -- I wonder if a ground-based "Hubble deep field" image could be generated?
Most of the HST's resolving power comes from being outside the atmosphere.
Adaptive optics have been improving over the last 20 years, if I recall correctly, but even with the advances in lasers to generate better artificial guide stars and in segmented mirrors for real-time atmospheric correction, they still can't do anything about the loss of transmission, especially at wavelengths outside the visible-light window.
The fact that a "small" diameter in nearly ideal conditions can produce better results than much bigger apertures on Earth is a testament that aperture alone can't do miracles, especially when there is a lot of noise.
And don't forget that HST optics are flawed.
Before the servicing mission that added the corrective optics, we were relying on software adjustments.
And even with that huge handicap the results were nothing short of amazing.
That is valid only for point sources.
Stars are not points when seen in a telescope, because of Airy disks, atmospheric scattering, and imperfections in the optics.
If you have perfect optics and you are outside the atmosphere, so that the only limit is the Airy disk, then yes, for point sources you don't have that problem.
Now, are you saying that a galaxy is a point source?
If you look back, I made a statement about the number of photons collected. I totally understand that camera people, who have a shitload of photons available most of the time, don't think about them like astronomers do. But if it's very dark, it becomes astronomy.