The atmosphere is a problem, but there is an interesting technique in computational photography where you take many pictures of the same scene through interference and then combine them to remove that interference. In theory each camera could provide its GPS coordinates (accurate to 20 m or so), the time of day (accurate to at least the second), its orientation with respect to the gravity vector (via a 3-axis accelerometer), and possibly the orientation of the magnetic field. The million-dollar question is how far you can use that information, together with the image data, to construct a computational model of the light field incident on the planet at a particular time, and from that identify and display the sources of that light.
Clearly you'd need a significant chunk of computing power to post-process that data. But it might be interesting.
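A minimal sketch of the combining step, assuming the frames have already been registered to a common grid (the hard part in practice) and modeling the interference as independent per-frame noise, with made-up numbers throughout:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" sky patch: one bright point source on a dark field.
true_sky = np.zeros((32, 32))
true_sky[16, 16] = 10.0

def noisy_frame(rng, sky, noise_sigma=5.0):
    # Each camera sees the scene plus strong, independent interference.
    return sky + rng.normal(0.0, noise_sigma, sky.shape)

# Averaging N registered frames shrinks the noise by about sqrt(N);
# with 10,000 frames the per-pixel noise drops from ~5 to ~0.05,
# and the point source stands out clearly.
frames = [noisy_frame(rng, true_sky) for _ in range(10_000)]
stacked = np.mean(frames, axis=0)

print(np.std(frames[0] - true_sky))  # per-frame noise, roughly 5
print(np.std(stacked - true_sky))    # stacked noise, roughly 0.05
```

The sqrt(N) improvement is why sheer numbers of mediocre cameras could, in principle, compete with one good one.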
I did want to see what the state of the art was, though, and they use rigid steel on concrete foundations and "path compensation" to deal with the alignment problems (quotes because I am quoting from outside my vocabulary...).
Check out EM reconstruction. Individual electron microscopy images are taken at terrible signal-to-noise, but you pick out tens of thousands of them, from different angles, and you can average them into a very high-resolution reconstruction.
I think the difference between EM reconstruction and ... "stellar reconstruction", as it were, is the relative amounts of parallax. When we take EM images of sub-microscopic objects, we can take them from appreciably different angles. When we take cell phone camera images of a star, we can't.