Purely speculating, not going on anything they’ve said: theoretically when they have more satellites you might be able to patch together a larger area with simultaneous video from several birds. You’d have a slightly ugly angle difference at the seam, of course. There may be technical constraints here that I’m not thinking of, though.
In any case, at 1 m GSD and 30 fps, 1.1 by 2 km is 66 megapixels a second. Even efficiently compressed, that’s a lot of information already.
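Sanity-checking that arithmetic (the raw bit depth and compression ratio below are my own illustrative assumptions, not Skybox specs):

```python
# Back-of-envelope check: 1 m GSD over a 1.1 x 2 km footprint at 30 fps.
width_m, height_m = 1100, 2000   # footprint in metres
gsd_m = 1.0                      # ground sample distance: one pixel per metre
fps = 30

pixels_per_frame = (width_m / gsd_m) * (height_m / gsd_m)
pixels_per_sec = pixels_per_frame * fps
print(pixels_per_sec / 1e6)      # 66.0 megapixels per second

# Assumed, not published: 11 bits/pixel raw and ~10:1 video compression.
bits_per_pixel_raw = 11
compression_ratio = 10
mbit_per_sec = pixels_per_sec * bits_per_pixel_raw / compression_ratio / 1e6
print(round(mbit_per_sec, 1))    # ~72.6 Mbit/s after compression
```

Even with generous compression assumptions, that's a serious sustained data rate per bird.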
The second is that receiving data from that many birds is problematic. There are only so many receiving stations and only so much EM bandwidth available for downlinking and recovering the images. Switching to drones doesn't solve the problem either: even as of last year, bandwidth was a limiting factor for the number of drones in the air at once. So now we have two problems: getting birds in the air, and the downlink bandwidth.
Finally, analyzing that much imagery is nearly impossible. Automated algorithms help with certain tasks, but storing, processing, accessing, and then analyzing all of it on a continuous basis is beyond what anyone can do today. So even if deploying enough sensors is solved, and the bandwidth is solved, that doesn't mean there are enough people to actually look at and interpret the results. And storing all of it for later instantaneous, on-demand retrieval is still a problem nobody's really solved.
1 - http://www.defensenews.com/article/20130331/DEFFEAT02/303310...
The obvious solution in this case is to combine images from multiple satellites, à la http://en.wikipedia.org/wiki/Very_Large_Telescope
Another effect here is that Skybox’s video uses the pan band, which extends into the infrared to 900 nm, and smog is usually pretty transparent in NIR, depending on the particulates.
And a third important factor is that smog is very diffuse (by definition: otherwise you call it a smoke plume), so if you have the bit depth you can just increase contrast until you get a good image.
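To illustrate that last point: a diffuse haze mostly adds a flat offset and compresses contrast, so with enough bit depth a simple linear stretch recovers the scene. This is a toy model with made-up numbers, not anyone's actual dehazing pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.integers(0, 2048, size=(4, 4)).astype(np.float64)  # 11-bit scene

# Assumed haze model: uniform brightness offset plus contrast attenuation.
haze_offset, haze_atten = 900.0, 0.4
hazy = haze_offset + haze_atten * scene       # low-contrast observation

# Linear contrast stretch back to the full 11-bit range.
lo, hi = hazy.min(), hazy.max()
restored = (hazy - lo) / (hi - lo) * 2047.0

# Because the haze here is purely linear, the stretch undoes it exactly
# (up to the same scale/offset applied to the clean scene).
expected = (scene - scene.min()) / (scene.max() - scene.min()) * 2047.0
print(np.allclose(restored, expected))        # True
```

Real smog isn't perfectly uniform, of course, which is where the harder algorithm work comes in.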
(I work at Mapbox on satellite imagery, but wasn’t involved with this particular blog post; what I’m saying here is stuff that people in remote sensing Just Know.)
Professor Yang Aiping, an expert in digital imaging with the School of Electronic Information Engineering of Tianjin University and leader of the civilian team, said she was facing tremendous pressure because of the enormous technological challenges.
"Most studies in other countries are to do with fog. In China, most people think that fog and smog can be dealt with by the same method. Our preliminary research shows that the smog particles are quite different from the small water droplets of fog in terms of optical properties," she said.
"We need to heavily revise, if not completely rewrite, algorithms in some mathematical models. We also need to do lots of computer simulation and extensive field tests."
Specifically, there is a lot of work in the mapping industry on taking a satellite image and turning it into a map. But there isn't nearly as much (if any) work on having a satellite, which can image the same spot repeatedly, downlink the map rather than the image. By re-imaging you can just punt on features that are hard to guess and wait for the next pass to see if they get easier. You might also be able to use multi-angle shadow analysis to pull out some 3D feature extraction as well.
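The "punt and wait for the next pass" idea could be sketched as a confidence threshold on each extracted feature. Everything here (the `classify` function, its score, the tile IDs) is invented for illustration:

```python
# Hypothetical sketch: classify each map tile with a confidence score;
# below the threshold, defer the decision until the next imaging pass
# instead of downlinking a guess.
CONFIDENCE_THRESHOLD = 0.8

def classify(pixels):
    """Stand-in for a real feature extractor returning (label, confidence)."""
    return ("road", 0.65)

def update_map(tile_map, tile_id, pixels, pending):
    label, conf = classify(pixels)
    if conf >= CONFIDENCE_THRESHOLD:
        tile_map[tile_id] = label          # confident: downlink the map feature
        pending.discard(tile_id)
    else:
        pending.add(tile_id)               # uncertain: re-image on the next pass

tile_map, pending = {}, set()
update_map(tile_map, "tile_42", pixels=None, pending=pending)
print(pending)                             # tile_42 deferred (0.65 < 0.8)
```

The nice property is that the downlink only ever carries compact, high-confidence map features, which is orders of magnitude smaller than the imagery itself.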
Simultaneous video surveillance of 8 x 8 km with a single drone, at 15 cm resolution.
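For scale, the figures quoted there imply a frame size of roughly:

```python
# Pure arithmetic from the quoted numbers; no sensor specs assumed.
side_m, gsd_m = 8000.0, 0.15
pixels_per_side = side_m / gsd_m          # ~53,333 pixels per side
gigapixels = pixels_per_side ** 2 / 1e9
print(round(gigapixels, 2))               # ~2.84 gigapixels per frame
```

That's per frame, which puts the earlier bandwidth and storage objections in perspective.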
I'm afraid to say that while this is neat, realtime video maps of the planet will be provided by drone tech, not satellites - for now. When we can stick something in geosync with amazing eyes at a low energy cost (space elevator), this picture may change - but until then... nope.
> As ARGUS floats overhead for months at a time, it dragnet tracks every moving person and vehicle and chronographs their movements, allowing forensic investigators to rewind the footage and watch the activities of anyone they select within the footage.
The amount of footage just one pod collects in a short span of time is monumental. They're storing what amounts to decades of footage for just one day's worth of operation. Maybe they have some criteria that trims unnecessary footage, but I can't imagine that saving all that much space as they don't want to miss anything.
Edit: Sorry, understood your comment now. The real problem isn't resolution, it's "getting a good angle". Unless your target and the satellite's orbit are magically aligned at a precise moment, you're going to have an extremely hard time getting a satellite into position in a reasonable time (the best commercial satellites can do is around 6 hours).
Robinson Meyer at “The Atlantic” did a nice overview two months ago: http://www.theatlantic.com/technology/archive/2014/01/silico...
It’s an interesting time in remote sensing.
They are owned by Longford.
Longford owned a perfect cover for intelligence assets in the Middle East (oilfields in Iraq); all of a sudden they sold those and took over a spy-sat startup.
Looks like a CIA money-laundering/cover-for-intelligence-assets type of deal.
(Yes, you'd think they'd update their website.)
I love living in a video game.
and says it’s from Skybox’s SkySat-1.
What sort of things would you want an API for?
Urthecast seems to provide images; what would be really awesome is if they offered some short-interval image refreshing (acquiring new images every few minutes).