Human eyes are unlikely to be the only point in parameter-space that's sufficient for driving. Cameras can do IR, 360° coverage, higher frame rates, wider stereo separation... but of course nothing says Teslas sit at a good point in that space.
Ah yeah, that's making even more assumptions. Not only does it assume the cameras are powerful enough, but also that there's already enough compute. There's a sensing-power/compute/latency tradeoff: you can get away with poorer sensors if you have more compute that can filter/reconstruct useful information from crappy inputs.
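A toy sketch of that tradeoff (all numbers and names here are made up for illustration): a cheap sensor returns the true value plus heavy noise, and spending more compute on filtering, here just averaging N readings, shrinks the error roughly like 1/√N. You buy back accuracy with processing, at the cost of latency (waiting for more samples).

```python
import random
import statistics

random.seed(0)
TRUE_DISTANCE = 12.0  # hypothetical: metres to an obstacle

def cheap_sensor(noise_sd=3.0):
    """One reading from a low-quality sensor: truth plus Gaussian noise."""
    return random.gauss(TRUE_DISTANCE, noise_sd)

def estimate(n_samples):
    """Spend compute filtering the crappy input: average n_samples readings."""
    return statistics.fmean(cheap_sensor() for _ in range(n_samples))

for n in (1, 16, 256):
    err = abs(estimate(n) - TRUE_DISTANCE)
    print(f"samples={n:>3}  abs error={err:.2f} m")
```

Real perception stacks use far smarter filters than a plain average (Kalman filters, learned reconstruction), but the same principle applies: each extra bit of accuracy squeezed out of a bad sensor costs compute and adds latency.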