How many pixels a stitched panorama has is not as clear cut, I guess. Just because you stitch a panorama at a certain size does not mean you actually have information at that resolution - and it's hard to say what size you should stitch at, since for any nontrivial panorama the number of source pixels contributing to a given area of the output varies greatly across the image due to the nonlinear transforms involved. E.g. stitching an equirectangular projection from ~rectilinear source images gives you much more resolution along the center line than at the top/bottom, which (for a panorama covering the full 180 degrees vertically) are each just a single point stretched across the whole width.
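The latitude-dependent stretch can be made concrete. A minimal sketch, assuming a full equirectangular output where the horizontal stretch relative to the center line goes as 1/cos(latitude):

```python
import math

def horizontal_scale(latitude_deg):
    """Relative horizontal stretch of an equirectangular panorama
    at a given latitude: rows near the poles spread a tiny arc of
    the scene across the full image width, so the same source
    detail covers 1/cos(latitude) more output pixels than at the
    equator (the center line)."""
    return 1.0 / math.cos(math.radians(latitude_deg))

for lat in (0, 30, 60, 85):
    print(f"{lat:2d} deg: {horizontal_scale(lat):6.2f}x stretch")
```

At 60 degrees up, every source pixel is already smeared over twice as many output pixels as at the horizon; near the poles the factor blows up, which is why there is no single "correct" stitch size.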
Then there is also the question of what the source resolution actually is - the quoted resolution of a camera (i.e. the "megapixels" in the marketing material) is usually the number of sensor "pixels", but each of those captures only a single color, so the resulting image won't really contain that many RGB pixels' worth of information even if it is developed at that resolution - quite a bit will be filled in by interpolation.
Then there is resolution loss from suboptimal focus (the scene covers a large range of distances), lens imperfections, or simply the air itself when things are far enough away.
So while the resulting image might have 120 000 000 000 pixels, it might not actually have more than 80 000 000 000 pixels worth of information. That's not to say it isn't impressive, just trying to point out that a simple gigapixel number might not actually say what you might think.
It's like Google Maps, but you're inside the earth pinned at the center and the images are projected onto the sphere. They load in tiles, so it's trivial to get it to be this fast.
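The tile trick is easy to sketch. This assumes a standard quadtree tile pyramid in the spirit of Google Maps (the tile size and the function name here are illustrative, not the actual viewer's API):

```python
def tiles_for_viewport(x0, y0, width, height, scale, tile_size=256):
    """Return the (row, col) tiles a viewport needs from one level
    of a quadtree tile pyramid, where scale = how many full-resolution
    pixels map to one pixel at this level (1 = deepest zoom).
    The viewer only fetches tiles intersecting the viewport, so
    panning a 120-gigapixel image stays fast."""
    tiles = []
    # Convert full-resolution viewport corners to this level's pixel grid.
    x1, y1 = x0 // scale, y0 // scale
    x2, y2 = (x0 + width) // scale, (y0 + height) // scale
    for row in range(y1 // tile_size, y2 // tile_size + 1):
        for col in range(x1 // tile_size, x2 // tile_size + 1):
            tiles.append((row, col))
    return tiles

# A 1920x1080 viewport at full zoom touches only ~40 tiles,
# no matter how many gigapixels the whole panorama has.
print(len(tiles_for_viewport(500_000, 200_000, 1920, 1080, scale=1)))  # -> 40
```

The work per frame depends on the viewport size, not the image size, which is the whole point.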
Crazy thing is that this is only about half of New York City, south of 34th Street. Basically all of Midtown is north of this, with some of the most impressive buildings and density - never mind Uptown, the Bronx, and Queens. Unless I'm missing some way to look north.
Having lived in Queens for a long time, I agree, though I usually say Manhattan, tbh. I'd sometimes make fun of someone and just say "the big city" when they were reluctant to go into Manhattan for whatever reason.
Born and raised in Manhattan. Live in Brooklyn. When I’m returning home I say I’m “going back to the city”. When I’m in Brooklyn going to Manhattan I say “going into the city”.
That really depends on what you would call a live stream and how fast the camera can cover the whole panosphere. Having the whole image updated every minute (probably not realistic for this resolution) could still be considered a live stream.
And the distance doesn't really help the privacy implications if the resolution is big enough that you can zoom in far enough to identify individual humans. Then again, expectations of privacy outside and at locations visible from outside are probably going to be violated at some point anyway.
I feel like if I were a potential buyer of such a camera, I'd also be in a position to install many cameras in many places and get a better perspective.
What's this give you? Passive iris scanning at a distance maybe? An ability to read people's phone screens? What's the problem this is solving?
This is the real enterprise use. I've worked on multiple EarthCam integrations on NYC high-rise construction projects. Honestly it wasn't fun to integrate, and the integrations (at least in 2008-2016) were all over Java. They might have updated since then, but the purpose was mostly "toy" integrations so that condo purchasers could watch the construction of their new multi-million-dollar home years before they got to move in.
I'm fairly certain it was never really used by customers, but the executive team LOVED it, so any time the API failed (usually because the API client needed an upgrade after a breaking change), it was all hands on deck and a middle-of-the-night/weekend patch.
> The GigapixelCam X80 is $24,995 as fitted, but if a client does not need a robotic version, those are less expensive. EarthCam’s line starts at $1,900 for a time-lapse camera with solar power and goes up from there.
Well, it didn't take very long to find a naked guy (not PARTICULARLY revealing, thankfully) in the building that says THE EPIC.
there's a guy working out on a peloton or similar on his balcony. Can you find him?!
This would be really neat to do again in the summer, when presumably(?) the roof decks are full of people. The rooflines are pretty desolate looking right now.
The near stuff is incredibly detailed as well! Look backwards at the building the photo is taken from and you can read labels on the bolts and see flakes of rust on the grate below.
Holy cow - I was literally just about to make this exact same comment. I was like, that's pretty incredible resolution on the rust on the steel grating. Wait, what about this bolt over here - and bam! I can see the nicks on the flathead's cut surface. Pretty amazing.
Didn’t think this was going to be too impressive. Then I started zooming, and realized how many people I could spy on in their windows and in the streets.
If you look to the right, just in front of the Eugene (one of the newer Hudson Yards high rise buildings) the top part of the crane is split. There have to be other similar examples throughout the photo.
I know megapixels are used when discussing cameras but I find the unit hard to understand. It would be so much more informative to know the resolution of the image in pixels.
s/the camera might do some processing/the camera is almost always doing processing and only captures ¼ of the information content compared to what its marketed resolution implies/
That is, the sensor will have that many "pixels", but each sensor pixel only captures a single color and the rest is interpolated.
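To put numbers on that, here's an illustrative accounting for an RGGB Bayer mosaic (the pattern and dimensions are assumed for the example; real demosaicing is far more sophisticated than this bookkeeping suggests):

```python
def bayer_sample_counts(width, height):
    """Per-channel counts of actually-measured samples on an RGGB
    Bayer sensor (assumes even dimensions). Each 2x2 tile measures
    1 red, 2 green, and 1 blue value; everything else in the
    developed RGB image is interpolated from neighbors."""
    tiles = (width // 2) * (height // 2)
    return {"R": tiles, "G": 2 * tiles, "B": tiles}

w, h = 8192, 5464                      # a hypothetical ~45 "megapixel" sensor
counts = bayer_sample_counts(w, h)
measured = sum(counts.values())        # one color sample per sensor pixel
print(measured / (w * h * 3))          # -> 0.333...: two thirds interpolated
```

So of the three color values per pixel in the developed file, only one was ever measured; red and blue are each sampled at a quarter of the advertised resolution.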
Agreed. A consumer camera might advertise inflated megapixels, while more professional cameras, like the medium format ones used for architecture, don't distort the number as much.
In that sense it's like node size: not directly comparable, but it gives a common reference point to talk about.
What you really want to know is the information content (in the information-theory sense) of the image. The number of pixels is somewhat arbitrary, depending on the processing involved - and especially for nonlinear projections like the one used here, there is no definitive answer on how many pixels it should have beyond an upper and a lower bound.
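One crude way to get at information content is compressed size: interpolated or upsampled pixels add redundancy, not information, so they barely grow the compressed output. A rough sketch using DEFLATE as a stand-in for a proper entropy estimate (this is a heuristic upper bound, not a rigorous measure):

```python
import os
import zlib

def compressed_bits_per_byte(data: bytes) -> float:
    """Crude proxy for information content: bits of DEFLATE output
    per input byte. Redundant data (like interpolated pixels)
    compresses well, so inflating an image's pixel count barely
    raises its total compressed size."""
    return 8 * len(zlib.compress(data, 9)) / len(data)

print(compressed_bits_per_byte(bytes(range(256)) * 64))  # repetitive: well under 1
print(compressed_bits_per_byte(os.urandom(16384)))       # random: close to 8
```

By that yardstick, a 120-gigapixel file containing 80 gigapixels of "real" detail would compress a lot better than its nominal size suggests.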
NYC underground water pipes are very old and run at low pressure. Buildings above a certain height need to pump water to roof and then gravity feed it.
If you search NYC water tanks on YouTube there are some good documentaries of the companies that build them.
The latter. These tanks are still supplied by the municipal water system, but storing water in them allows for consistent water pressure in tall buildings.
“This photo was probably created over several weeks, but that being said we could run a simple version where it runs 77 photos at 70mm to create a 2.6 gigapixel photo and that takes 15 minutes,” Cury explains.
I'd guess 10 minutes up to an hour on modern hardware, say a new Mac Studio. That would be in Lightroom - I'm guessing they're using software optimized for this, though.
Taking the actual pictures takes a while too. The Sony A1 can shoot 50 MP frames at 20 fps - pretty amazing. That's (conveniently) 1 gigapixel per second, so a minimum of 2 minutes of pure shutter time to capture this entire image. Add at least a second between shots for the robotic housing to reposition, and with ~2,400 shots we're talking 40+ minutes.
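The back-of-envelope math, using the numbers assumed in this thread (50 MP frames at 20 fps, ~1 s repositioning per shot - none of these are confirmed EarthCam specs, and frame overlap is ignored):

```python
def min_capture_seconds(total_gigapixels, mp_per_shot, fps, repositioning_s):
    """Lower bound on capture time: shots needed at the quoted
    per-frame resolution (ignoring overlap between frames), plus
    a fixed repositioning delay per shot for the robotic head."""
    shots = total_gigapixels * 1000 / mp_per_shot
    return shots / fps + shots * repositioning_s

# 120 GP at 50 MP/shot and 20 fps, with 1 s to reposition per shot:
print(min_capture_seconds(120, 50, 20, 1.0) / 60, "minutes")  # -> 42.0 minutes
```

With zero repositioning delay the shutter-time floor is 2,400 shots / 20 fps = 120 seconds; the per-shot delay, not the sensor, dominates the total.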
Could certainly be used to capture most of the faces in a huge crowd.
Nothing wrong with reposts when they generate new discussion. At least submissions from this domain didn't seem to take off before, so many people will not have seen it yet.
https://petapixel.com/2021/04/27/this-120-gigapixel-photo-is...