Also, a good friend of mine developed this digital scope so you can zoom in on the picture easily without going back to a '90s internet experience:
I'd be happy to answer your questions :) Enjoy!
(Not related to the project, just checked their website)
These mounts are amazing pieces of engineering. I want one just to fondle the finely machined and anodized aluminum.
“In an eternally inflating universe, anything that can happen will happen; in fact, it will happen an infinite number of times. Thus, the question of what is possible becomes trivial—anything is possible […] The fraction of universes with any particular property is therefore equal to infinity divided by infinity—a meaningless ratio.” -Alan Guth
That image evokes all those feelings.
Then I remember that interesting probably = bad, and we may not live to appreciate it all.
“Space is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space.”
I mean, you can see every place on this planet within a few seconds by looking at a satellite photo, and yet you would not have seen your house and the living beings around it. Similarly, there are many more details which we do not perceive in our everyday life and which you can explore without having to invent a couple of scientific miracles ;-)
It is not that I don't value the stars. It is just that I think we shouldn't dream of arriving at some wonderful planet one day when we already are on a wonderful planet. We just don't appreciate it like that.
We do live in an amazing place.
You might need eyes the size of a satellite dish to see them all though.
In a very shallow, simplistic sense, we see all these other amazing phenomena out there. Our home seems like a little rural backwater, crossroads and a store kind of thing, by comparison.
When the full image was loaded in Chrome, Chrome Helper showed ~1.30 GB of memory consumption; it sometimes went up to 5.60 GB on repeated tests and quickly reverted to ~1.30 GB on average.
Preview consumed ~550 MB at the start, and further zooming in/out consumed ~250 MB on average.
Chrome struggled a bit after clicking Zoom. Preview showed a low-resolution image when Zoom was clicked and then rendered the original resolution. Preview's user experience was comparatively better.
Intel Power Gadget showed CPU spikes during these actions, but only nominal GPU spikes; I don't think Metal is being used in Preview.
Edit: RAM: 32 GB, 7th-gen Core i5
Can anyone comment on how much the images produced by these filters differ from what the human eye would see if it were somehow able to look at these objects? Are they also taking in information from the non-visible spectrum and coloring it, or is this all just a focusing of light that real humans would have been able to perceive?
I know they mentioned using different filters to achieve the two different images, but was left wondering whether the colors themselves are "real".
First of all, emission nebulas are not very bright, so no telescope can give a view as bright as a long exposure does. If you can see a nebula through a telescope at all, it will be very faint, which triggers another effect in your eye: the cells for color reception are not very sensitive. As with general night vision, you will usually see nebulas only with your light-sensitive receptors, which don't see colors, so a nebula will appear in a grey-greenish color.
High-quality pictures of nebulas are taken at very specific wavelengths, at the common emission lines you listed. Even at high brightness, they wouldn't convert directly into a good color picture: 500 nm is turquoise, while 656 and 672 nm are both very deep red. A color image converting these wavelengths directly into RGB values would not be very impressive; it would look more like the bottom image on the page. So usually a color mapping is used to generate impressive images which also show a lot of the detail. With 3 different "colors" in the source image, you can apply an arbitrary transformation to generate an RGB image. For example, most images from the Hubble telescope use a common mapping which is consequently called the Hubble palette.
As shown on the page, you can create very different-looking images from the same data set by choosing the color mapping.
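A minimal sketch of that "arbitrary transformation", assuming three aligned narrowband frames normalized to [0, 1]; the mixing matrix here is illustrative, not anyone's actual pipeline:

    import numpy as np

    # Three aligned narrowband frames, e.g. SII (672 nm), H-alpha (656 nm)
    # and OIII (500 nm), each a 2D float array normalized to [0, 1].
    def map_to_rgb(sii, ha, oiii, matrix=None):
        if matrix is None:
            # Hubble-palette-style assignment: SII -> R, H-alpha -> G, OIII -> B
            matrix = np.eye(3)
        cube = np.stack([sii, ha, oiii], axis=-1)   # shape (H, W, 3)
        rgb = cube @ matrix.T                       # arbitrary linear color mix
        return np.clip(rgb, 0.0, 1.0)

Swapping in a different matrix is exactly how the same data set yields the very different-looking images on the page.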
Short answer: the color they mapped to green is actually closer to red, and the color they mapped to blue is closer to green, so it would have less cyan (blue and green) and more magenta (blue and red). It would probably look a little more purple-ish.
- Remotely-controlled observatory at the El Sauce Observatory in Chile
- A 160-mm APO-refractor telescope and a Moravian CCD
- Presumably hefty image processing requirements
I get that these guys are amateurs in that they are not being paid for this, but presumably this costs some serious money? Or are the components they use within reach of a well-to-do hobbyist these days (all relative, I know)?
That's just the big-ticket stuff. They'll have a guidescope, colour filters, a laptop (I assume), all sorts of paraphernalia supporting the effort.
Given that it's remote-controlled and in an observatory in Chile, I suspect that adds another order of magnitude to the cost. But I'm unsure specifically how much, or whether they're renting scope time.
You can buy much cheaper equipment and still do admirably; this setup is really quite extreme for a hobbyist.
Sounds like it took several trips and weeks to get it right.
It'll likely spend a few million years burning hydrogen before going to helium and heavier elements for a few thousand years.
A black hole is almost inevitable.
Since information and gravitational effects propagate at the speed of light, the reach of "we" is limited by our age: about 100 light years for all but a very few people.
So I am hedging my bets by saying it's wrong, or that it's so obvious it's redundant.
The main thing about the capture settings is to use RAW. Other settings (ISO, exposure time, etc.) depend on the camera being used. However, the goal is to capture as much light as possible within each frame.
The software does the image alignment (rotation, scaling, etc.) needed for stacking. You can stack images of the same object taken from different physical locations: spend a weekend in the desert shooting an object, then spend another weekend the next month at the top of a mountain shooting the same object, and all of the images can be stacked.
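A toy version of that register-and-stack step, hedged: this only handles translation via phase correlation, whereas real stacking software (e.g. DeepSkyStacker, Siril) also solves for rotation and scale:

    import numpy as np
    from scipy.ndimage import shift
    from skimage.registration import phase_cross_correlation

    def align_and_stack(frames):
        """Register each frame to the first by translation, then median-combine."""
        ref = frames[0].astype(float)
        aligned = [ref]
        for frame in frames[1:]:
            frame = frame.astype(float)
            offset, _, _ = phase_cross_correlation(ref, frame)  # (dy, dx) shift
            aligned.append(shift(frame, offset))                # move into register
        return np.median(aligned, axis=0)   # median rejects one-off outliers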
The telescope alignment precision is important, but less so than it used to be, for a couple of reasons. With gear available today, you can take "portable" telescopes into the field, do a decent polar alignment, and then let the guide scope and software correct for any imprecision in the main scope's alignment, and even for tracking errors caused by manufacturing imperfections in the mount's worm gear. A guide scope is a second, smaller telescope (with a wider field of view) attached to the main scope, with a camera attached to it. That camera is connected to a computer running the guiding software, which tracks a designated star. The guiding software talks to the telescope's motors and can speed them up or slow them down to keep the guide star within a quarter-pixel deviation.
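In spirit, that correction loop is just proportional feedback on the star's centroid. A sketch with a hypothetical mount API (the nudge_* methods are made up for illustration; real guiders such as PHD2 add calibration, backlash handling and filtering):

    import numpy as np

    def centroid(img):
        """Crude intensity-weighted centroid of a background-subtracted frame."""
        w = np.clip(img - np.median(img), 0, None)
        ys, xs = np.indices(img.shape)
        return np.array([(w * ys).sum(), (w * xs).sum()]) / w.sum()

    def guide_step(frame, target, mount, gain=0.7, deadband=0.25):
        """One guiding iteration; mount.nudge_* are hypothetical pulse commands."""
        err = centroid(frame) - target        # pixel error (dy, dx)
        if np.abs(err).max() > deadband:      # ~1/4-pixel deadband, as above
            mount.nudge_dec(-gain * err[0])
            mount.nudge_ra(-gain * err[1])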
Also, with digital cameras, images of shorter exposure times are taken and then stacked in software. There are multiple benefits to doing this. Consider exposing a single frame for 60 minutes versus taking twelve 5-minute exposures or thirty 2-minute exposures. If anything bad happens during one of the short exposures (a plane or a satellite crosses your view, someone shines a laser pointer through your frame, a bug lands on your primary, etc.), it's not that big of a deal to capture it again. Also, digital camera sensors tend to get noisy with longer exposures due to heat buildup around the sensor (a problem film cameras do not suffer from).
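Stacking also lets you reject those one-frame events statistically: a satellite trail lands in only one sub-exposure, so a sigma-clipped combine simply throws those pixels away. A minimal numpy sketch:

    import numpy as np

    def sigma_clip_stack(frames, kappa=3.0):
        """Mean-combine frames, masking per-pixel outliers (trails, cosmic rays)."""
        cube = np.stack([f.astype(float) for f in frames])  # shape (n, H, W)
        mu, sd = cube.mean(axis=0), cube.std(axis=0)
        outliers = np.abs(cube - mu) > kappa * sd
        return np.ma.masked_array(cube, outliers).mean(axis=0).filled(mu)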
Maybe worth pointing out that film has its own issues with long exposures, though. If I remember right, film's response to light isn't strictly linear in exposure time, so you get less and less useful additional exposure the longer you expose.
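If I have it right, this is the Schwarzschild effect: the effective exposure grows sublinearly with time, roughly

    E_effective ∝ I · t^p,  with p ≈ 0.8-0.9 for typical films

so doubling the exposure time buys noticeably less than double the recorded density.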
While digital camera sensors usually pick up noise during long exposures, this is less of an issue for astronomical cameras, because they fight this noise by cooling the CCD chip. Usually the chip is cooled via a Peltier element to temperatures below -20 °C, where thermal noise is very low.
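To put numbers on that (a common rule of thumb, hedged, since the doubling interval varies by sensor): dark current roughly doubles every ~6 °C, so cooling from 20 °C to -20 °C cuts it by about two orders of magnitude.

    # Rule of thumb (sensor-dependent): dark current roughly doubles per ~6 degC.
    def dark_current_factor(t_ambient_c, t_cooled_c, doubling_c=6.0):
        return 2 ** ((t_ambient_c - t_cooled_c) / doubling_c)

    print(dark_current_factor(20, -20))  # ~100x less dark current at -20 degC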
I think filtering out "local" stars should be very doable given ML/CV progress.
It should probably be changed to a shortened form of that.
That site will help.
But if you're willing to accept some optical aids like a reflector and eyepiece, a large amateur "light bucket" Dobsonian telescope can unveil deep-space objects to the naked eye.
I don't think it's possible to get anything like these photos, though; the sensor is collecting light over a very long duration to present as a single image. The only way to get more light into your naked eye in real time is with more aperture, and obviously there are practical limits there.
This was among the most breathtaking things I've ever seen (the other being a particularly vivid showing of northern lights in Alaska). The southern hemisphere's sky is infinitely more exciting than the northern one.
Fractal nature of?