There are many cases where someone will write tests that hit an endpoint directly and then assert on the whole response, which in this case is quite huge. They'll then do so for all branches.
Their library / service / etc. code is technically exercised, sure, but doTheFooThing() isn't directly tested, so it could have a bug that only surfaces when another caller passes different parameters, a bug that direct testing would catch. Extreme coupling happens all the time.
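A minimal sketch of the failure mode, with doTheFooThing() and the endpoint both invented for illustration (nothing here is from an actual codebase):

```python
# Hypothetical helper: the endpoint only ever calls it with the default
# limit, so endpoint-level tests never exercise other values.
def do_the_foo_thing(items, limit=10):
    return [s.upper() for s in items[:limit]]

def handle_list_endpoint(request):
    # Endpoint wraps the helper with a fixed calling pattern.
    return {"items": do_the_foo_thing(request["items"])}

# Endpoint-level test: exercises exactly one calling pattern.
assert handle_list_endpoint({"items": ["a", "b"]}) == {"items": ["A", "B"]}

# Direct test: covers behavior no current endpoint caller triggers.
assert do_the_foo_thing(["a", "b", "c"], limit=2) == ["A", "B"]
```

If a later caller passes a non-default limit, only the direct test would have exercised that path in advance.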
Now I'm slowly converting it to sanity, and my teammates are copying me.
To be fair, it was one of those "Get this out now because we're dying" kind of codebases, not due to a lack of skill.
But once code is written, it's hard to undo. Then the bad pattern becomes "keeping the same style".
If doTheFooThing() is called from somewhere else, then that somewhere else should also have tests. So I find that an argument from "purity" more than a practical consideration about bug probability.
Also, if you only test doTheFooThing() but not the API, you could accidentally refactor yourself into breaking the API in a backwards-incompatible way (or lose bug-compatibility, which is sometimes required; at minimum you should detect the change, check logs, and warn consumers). So the API tests are needed anyway.
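One way to catch that kind of breakage is a contract-style test that pins the externally visible response shape, so internal refactors can't silently change it. A hypothetical sketch (the handler and its fields are made up for illustration):

```python
# Hypothetical handler: imagine it delegates to internal helpers that
# get refactored frequently.
def get_user(user_id):
    return {"id": user_id, "name": "alice", "roles": ["admin"]}

# Contract test: assert on the fields and types consumers depend on,
# not on how the response is assembled internally.
response = get_user(42)
assert set(response) == {"id", "name", "roles"}
assert response["id"] == 42
assert isinstance(response["roles"], list)
```

A refactor that renames or drops a field fails this test even if every unit test on the internals still passes.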
There's a balance of course: if doTheFooThing() is an important internal crossroads, or algorithmically non-trivial, or important for other reasons, then it should be tested in isolation. But between only semi-integration tests (hitting endpoints and checking responses) and only lots of small unit tests that break or need rewriting for the simplest refactorings yet don't catch subtle API breakage, I'd want to work with the former any day. The units of code are often trivial and not where mistakes are made; the mistakes come when stringing them together, and then it's harder to trust the human capacity to imagine failure scenarios than to just run the real handlers.
Having full unit coverage with full path coverage is a great ideal. The reality is that for most companies the overhead of writing and maintaining these tests is impossible to justify from a business POV. Getting buy-in to spend more than 50% of your time on tests (writing, updating, verifying) is a very hard sell. For companies with under-staffed, over-worked development teams, it just doesn't happen.
At this point in my career I'm firmly in the camp that functional testing is what really matters in most cases. It's a compromise between the realities of business and the needs of engineering. I can test the end product of the development work as a whole and verify that everything works as expected with no known strange side effects. This also serves as a contract with your business users about the functionality of the product. If bugs are discovered, you can add specific use-case tests that trigger the bug and prevent regression. None of this precludes limited unit testing for critical code paths. I find this a much more pragmatic approach.
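The "add a test when a bug is discovered" step might look like this. Everything here is hypothetical (the function, the bug, and the input are invented to illustrate the pattern):

```python
# Hypothetical fix: parse_quantity used to raise ValueError on inputs
# with thousands separators like "1,250", reported by a user.
def parse_quantity(text):
    return int(text.replace(",", ""))

# Regression test: pin the exact input from the bug report so the
# failure can't silently reappear in a later refactor.
assert parse_quantity("1,250") == 1250
assert parse_quantity("7") == 7
```

Over time these accumulated regression tests document real-world failure modes far better than speculative unit tests written up front.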
In the case of SQLite I think it's mostly down to hard work in service of the ambition to deliver a robust project, plus the existence of fuzzing, which can automate test generation.
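The automated-test-generation idea can be sketched in a few lines. This is not SQLite's actual fuzzing harness, just a toy property-based loop over a made-up encoder, to show how random inputs replace hand-written cases:

```python
import random

# Hypothetical pair of functions under test.
def encode(data: bytes) -> bytes:
    return data.hex().encode()

def decode(blob: bytes) -> bytes:
    return bytes.fromhex(blob.decode())

# Fuzz loop: generate random inputs and check an invariant (round-trip)
# instead of enumerating cases by hand. Seeded for reproducibility.
rng = random.Random(0)
for _ in range(1000):
    data = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
    assert decode(encode(data)) == data
```

Real fuzzers (AFL, libFuzzer, SQLite's own harnesses) add coverage guidance and crash detection on top, but the core move is the same: the machine generates the test inputs.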