A spherical cutoff limit is also not how the human eye sees. Light can travel from any distance to reach your eye. The view frustum is an optimization, and there will always be cases where it fails.
> Light can travel from any distance to reach your eye.
Yes, but it won't necessarily register, so unless we're talking about an insanely bright distant object that should be visible through fog from any distance, and which also isn't part of the skybox, this particular problem practically never arises. The flat far plane, on the other hand, is glaringly obvious in e.g. Minecraft, or in any Unity game with a first-person view and fog.
You're describing a problem with a particular method of fog rendering. The correct way to address that would be to change how fog is rendered. The perspective projection and the far plane are simply not the correct place to look for a solution to this.
I disagree. This problem exists even when fog is completely absent: objects at the sides of the screen are distorted regardless of whether fog is present. I guess you could use fog, rendered in a particular way, to make it less noticeable, but it's still there. So the root cause is the perspective projection.
Now, I've googled a bit on my own, trying all kinds of search phrases, and apparently it is a known problem that the perspective projection, when a wide FOV (about 75 degrees and up) is used, will distort objects at the sides of the screen. One of the solutions appears to be a post-processing pass called "Panini Projection" which undoes that damage at the sides of the screen. From what I understand, it uses a cylinder (rather than a sphere) as the projection surface instead of a plane.
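For reference, the mapping itself is short enough to write down. Here's a rough Python sketch of the published Panini formula with the classic parameter d = 1 (the function names are mine, and real implementations do this per-pixel in a shader):

```python
import math

def rectilinear_x(theta):
    # Standard perspective projection: project onto a flat plane at unit
    # distance. Horizontal screen position grows like tan(theta), which
    # blows up as theta approaches 90 degrees.
    return math.tan(theta)

def panini_x(theta, d=1.0):
    # Panini projection: project onto a cylinder, then view that cylinder
    # from a point d behind its center. d = 1 is the classic case, which
    # reduces to 2 * tan(theta / 2).
    return (d + 1.0) * math.sin(theta) / (d + math.cos(theta))

# Compare how far off-center a point lands for increasing view angles.
# Rectilinear stretches much faster near the edge of a wide FOV.
for deg in (10, 40, 70, 85):
    t = math.radians(deg)
    print(deg, rectilinear_x(t), panini_x(t))
```

At 85 degrees off-axis the rectilinear coordinate is over six times the Panini one, which is exactly the edge stretching you're seeing.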
You originally described a problem where fog had a different falloff in world space at the edges of the screen compared to the center of the screen. The root cause of that is not the perspective projection; it's how the fog is being rendered.
The issue you are describing now is called perspective distortion (https://en.wikipedia.org/wiki/Perspective_distortion), and it is something that also happens with physical cameras when using a wide-angle lens. There is no single correct answer for dealing with this: similarly to the situation with map projections, every projection is a compromise between different types of distortion.
Anyway, if you're writing a ray tracer it's possible to use whatever projection you want, but if you're using the GPU's rasterizer you're stuck with the rectilinear projection, and any alternate projection has to be approximated some other way (such as via post-processing, like you mention).
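To illustrate that freedom: in a ray tracer the projection is just the rule that maps a screen coordinate to a ray direction, so swapping projections is a one-function change. A toy sketch in Python (the function names and conventions are made up for illustration):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def rectilinear_ray(sx, sy, tan_half_fov):
    # Rasterizer-style projection: screen coordinates in [-1, 1] map to a
    # point on a flat image plane at z = 1.
    return normalize((sx * tan_half_fov, sy * tan_half_fov, 1.0))

def cylindrical_ray(sx, sy, half_fov):
    # An alternate projection a ray tracer can use directly: interpret the
    # horizontal screen coordinate as an angle around a cylinder, so
    # horizontal angular resolution is uniform across the screen.
    theta = sx * half_fov
    return normalize((math.sin(theta), sy * math.tan(half_fov), math.cos(theta)))

# Both agree at the screen center, but distribute rays differently toward
# the edges; with the rasterizer you'd have to fake the second one.
print(rectilinear_ray(0.0, 0.0, math.tan(math.radians(45.0))))
print(cylindrical_ray(1.0, 0.0, math.radians(45.0)))
```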
But it might more accurately reflect human visual perception. I can’t think of any case where your peripheral vision will perceive things that you won’t perceive by looking directly at them.
The only physically accurate answer for where to put the far plane is "behind everything you want to be visible". It fundamentally does not make any sense to change the shape of the far plane to "more accurately reflect human visual perception" because there is no far plane involved in human visual perception, period.