Disagree. It's only the author's fault. We can't expect engineers to both write their own code and find every last bug in other people's, especially when PRs can run to thousands of lines.
A broken culture would be expecting engineers to find every last bug in other people's code and then blaming them when bugs slip through.
When issues happen, it is almost never just one person's fault.
Just trace the Whys:
1) Why did the bug happen? These lines of code.
2) Why wasn't it caught by tests? The test was broken / didn't exist (see the example below).
3) Why was the code submitted without a test? The reviewer didn't call it out.
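For step 2, the fix usually ends with a regression test pinned to the incident. A rough sketch in Python (the pagination bug here is a made-up example, not any actual incident):

    # Hypothetical: the bug was an off-by-one that dropped the last page.
    import unittest

    def paginate(items, page_size):
        # Fixed version: the trailing partial page is included.
        return [items[i:i + page_size] for i in range(0, len(items), page_size)]

    class PaginationRegressionTest(unittest.TestCase):
        def test_last_partial_page_is_not_dropped(self):
            # The original (buggy) version returned only full pages.
            pages = paginate([1, 2, 3, 4, 5], page_size=2)
            self.assertEqual(pages, [[1, 2], [3, 4], [5]])

    if __name__ == "__main__":
        unittest.main()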
Arguing about percentages of blame doesn't matter. What matters is fixing the team process so this doesn't happen again: formally documented standards for test coverage, education, and building a team culture where people understand why it matters.
It's not "just the author's fault". That kind of blame game is toxic to teams.
Our code review process involves a reviewer explicitly taking ownership of the PR, and I think that's a reasonable expectation with the right practices around it. A good reviewer will request that a PR containing thousands of lines be broken up, without doing much more than skimming to make sure it isn't bulked up by generated code, test data, or something similarly benign.
Ah, we draw a distinction between checked-in generated code and JIT-generated code, and the former does show up in code review (which is sometimes the point of checking it in - you can easily spot-check some of it to make sure the generator behaves as you expect).
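One cheap way to keep that honest (a rough sketch, not our actual tooling - generate.py and the output path are made-up names) is a CI step that re-runs the generator and fails if the checked-in copy has drifted:

    # CI guard: regenerate and compare against the checked-in copy.
    import subprocess
    import sys
    from pathlib import Path

    CHECKED_IN = Path("src/generated/api.py")  # hypothetical path

    def main() -> int:
        # Assumes generate.py writes the generated source to stdout.
        fresh = subprocess.run(
            [sys.executable, "generate.py"],
            capture_output=True, text=True, check=True,
        ).stdout
        if fresh != CHECKED_IN.read_text():
            print("Checked-in generated code is stale; re-run the generator.")
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())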
Google has a culture of small PRs. If your PR is too big (think more than 500 lines changed), you actually won't be able to get a readability review on it - unless someone on your team does the readability review for you.
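You can approximate that limit locally with a pre-push check. This is just a sketch, not Google's internal tooling, and it assumes your branch point is origin/main:

    # Flags changes bigger than the ~500-line threshold mentioned above.
    import subprocess
    import sys

    MAX_CHANGED_LINES = 500

    def changed_lines(base="origin/main"):
        # git diff --numstat prints "added<TAB>deleted<TAB>path" per file.
        out = subprocess.run(
            ["git", "diff", "--numstat", base],
            capture_output=True, text=True, check=True,
        ).stdout
        total = 0
        for line in out.splitlines():
            added, deleted, _path = line.split("\t", 2)
            if added != "-":  # binary files report "-"
                total += int(added) + int(deleted)
        return total

    if __name__ == "__main__":
        n = changed_lines()
        if n > MAX_CHANGED_LINES:
            sys.exit(f"{n} lines changed; consider splitting this up.")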