Great article. One thing that's always stumped me as an amateur photographer is how an object is "in focus" when the rays from that object converge to the smallest point, so the smallest number of pixels get hit. It would seem to me you would want to collect the maximum amount of information and thus use more of the sensor. How does converging onto less of the sensor give you a sharper photo?
Ideally, the mapping from object to image is injective. What you're proposing will lead to "hash collisions" aka blurriness, since each point in the object will bleed colors into neighboring points in the image.
The entire object doesn't get collapsed to a single point. Rather, a single point of the object radiates light in all directions. A lens then captures a fraction of that radiation and collapses it back down to a single point. Then we iterate over each point in the scene with a "for each".
The amount of total light is the same, but the smaller the points of focus, the easier it is to distinguish image elements from each other because each point overlaps less with its neighbors.
Here’s a metaphor. Let’s say you have 10 glasses, with 10 different amounts of water in them. When you hit them with a spoon, each glass makes a different musical note: 10 different notes.
If you pour the water from each glass individually into 10 new glasses, you preserve those 10 notes.
If you combine every two glasses into one, now you have 5 glasses, and only 5 notes. It’s the same total amount of water but you lost information.
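The glasses metaphor can be sketched in a few lines of code (the water amounts are made-up numbers for illustration): copying preserves the ten distinct "notes", while merging pairs keeps the total water but destroys information.

```python
# Ten glasses with ten distinct water levels -> ten distinct "notes".
glasses = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]

# Pouring each glass into its own new glass preserves all ten notes.
copied = list(glasses)
assert len(set(copied)) == 10

# Combining every two glasses into one keeps the same total water...
merged = [glasses[i] + glasses[i + 1] for i in range(0, len(glasses), 2)]
assert sum(merged) == sum(glasses)

# ...but only five notes remain, and the original levels
# cannot be recovered from the sums.
assert len(set(merged)) == 5
```

The merge is a many-to-one mapping, which is exactly the "hash collision" mentioned above: distinct inputs collapse to the same output.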
If each point's light takes up a larger part of the sensor, then the distance between any two details must be larger so that the areas their light covers on the sensor don't overlap. If they overlap, you have no clue which detail the light is coming from.
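A tiny simulation sketch of that overlap (the box-shaped light spread, pixel count, and source positions are all assumptions for illustration): two point sources land on a 1-D "sensor", once sharply and once spread out.

```python
def render(spread, positions, n=21):
    """Render point sources onto a 1-D sensor of n pixels.

    Each source's light is spread evenly over `spread` pixels
    (a crude stand-in for an out-of-focus blur spot).
    """
    sensor = [0.0] * n
    half = spread // 2
    for p in positions:
        for dx in range(-half, half + 1):
            if 0 <= p + dx < n:
                sensor[p + dx] += 1.0 / spread
    return sensor

sharp = render(1, [8, 12])   # in focus: two separate, distinct peaks
blurry = render(9, [8, 12])  # out of focus: the two spots overlap

# Total light captured is identical in both cases...
assert abs(sum(sharp) - sum(blurry)) < 1e-9
# ...but in the blurry image the pixel between the details receives
# light from both sources, so they can no longer be told apart.
assert sharp[10] == 0.0 and blurry[10] > 0.0
```

Same total energy on the sensor, different ability to distinguish the two details.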
Look up the circle of confusion, the Rayleigh resolution criterion, and the Sparrow resolution criterion for the technical details of resolution.
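For a flavor of what those criteria give you: the Rayleigh criterion says two points are just resolvable when their angular separation is about 1.22 λ/D. A quick sketch (the wavelength and aperture numbers are arbitrary examples, not from the article):

```python
def rayleigh_limit(wavelength_m, aperture_m):
    # Rayleigh criterion: minimum resolvable angular separation
    # (in radians) for a circular aperture of diameter D.
    return 1.22 * wavelength_m / aperture_m

# Example: green light (550 nm) through a 25 mm aperture.
theta = rayleigh_limit(550e-9, 25e-3)  # ~2.7e-5 rad
```

Larger apertures lower this limit, which is one reason a wider lens opening can resolve finer detail (diffraction aside, focus errors dominate in practice).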
It's more that the rays from a single point on an object converge to the smallest point when in focus. The rays from the object as a whole are spread out over an object-shaped area on the image plane, unless the object is (effectively) at infinity, in which case they will converge to a point...
We started with 1.8 and finally upgraded to 1.12 at the end of last year.
Our biggest pain point, next to the tendency of runaway deployments filling disks up with useless logs (which once led to a totally corrupted master after a weekend), was/is that the "official" Jenkins package is the ultimate PITA to upgrade. It massively lags behind despite security issues (current: 2.150.3 - mesosphere/jenkins: 2.150.1!), and you can't even run Jenkins outside of DC/OS because it needs the Marathon shared library to work.
Another thing that we dearly missed was the ability to "drain" a node: if I want to perform maintenance on a node but cannot shut it down right away because a service on it is still in use, I'd like to at least prevent new jobs from being spawned on that node. Other annoyances: during system upgrades, stopping the resolvconf generator does not restore the original resolv.conf, leading to broken DNS; and when specifying NTP servers by name, the NTP server could not be resolved at boot time (as resolv.conf still referred to the weird DC/OS round-robin DNS), so DC/OS refused to start because the clock was out of sync...