XML schemas only solve the most obvious validation failures: even when every element is present as expected, you still need exactly the same process to handle invalid data. XML does have XPath, but JSONPath covers the same ground, and XPath comes with its own downsides: many environments are stuck on XPath 1.0, and XML libraries tend to be poorly designed (namespaces, for example, are unnecessarily painful in most XPath implementations).
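To make the namespace pain concrete, here's a small illustration using Python's lxml (one implementation among many, but the underlying problem is XPath 1.0 itself, which has no syntax for referring to a default namespace):

```python
from lxml import etree

# A document that uses a default namespace, as real-world feeds often do.
doc = etree.fromstring(
    b'<feed xmlns="http://www.w3.org/2005/Atom">'
    b'<entry><title>hello</title></entry>'
    b'</feed>'
)

# XPath 1.0 cannot refer to the default namespace, so the obvious
# query silently matches nothing:
assert doc.xpath("//entry/title") == []

# Instead you must invent a prefix and pass an explicit mapping:
titles = doc.xpath(
    "//a:entry/a:title/text()",
    namespaces={"a": "http://www.w3.org/2005/Atom"},
)
print(titles)  # ['hello']
```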
Every significant project I've worked on has used the same cautious loading process for XML as for JSON, with some extra checking at the early XML load stage: XML is harder to work with, so fewer people produce valid documents (forget schema validation; errors in simple character encoding, well-formedness, or namespace declarations are surprisingly common). In practice, I tend to end up with a forgiving parser, a collection of selectors, and full validation on the results, which works equally well with either format.
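As a sketch of that pipeline (lxml and the field names `id` and `amount` are my placeholders, not a prescription), the format-specific part shrinks to a thin extraction layer and the real validation is shared:

```python
import json
from lxml import etree

def extract_from_xml(raw: bytes) -> dict:
    # Forgiving parse: recover=True papers over minor well-formedness issues.
    doc = etree.fromstring(raw, etree.XMLParser(recover=True))
    return {"id": doc.findtext("id"), "amount": doc.findtext("amount")}

def extract_from_json(raw: bytes) -> dict:
    data = json.loads(raw)
    return {"id": data.get("id"), "amount": data.get("amount")}

def validate(record: dict) -> dict:
    # Full validation runs on the extracted result, so it is
    # identical for both formats.
    if not record["id"]:
        raise ValueError("missing id")
    record["amount"] = float(record["amount"])  # rejects junk values
    return record

# Either loader feeds the same validator:
validate(extract_from_xml(b"<doc><id>42</id><amount>9.99</amount></doc>"))
validate(extract_from_json(b'{"id": "42", "amount": "9.99"}'))
```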
The "producing valid documents" issue is solved by running the output through the schema before, or immediately after it's saved, and throwing an error if the code has generated an invalid document.
Again, this approach does not work on projects where you can't immediately reject invalid documents. In many cases, outright rejection is unacceptable from a business standpoint, so you're forced to attempt to salvage documents with minor conformance problems.
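One common compromise, again as a sketch using lxml's recovery mode, is to try a strict parse first and fall back to salvaging only when it fails, so you at least know which documents needed rescuing:

```python
from lxml import etree

def parse_leniently(raw: bytes) -> etree._Element:
    try:
        return etree.fromstring(raw)  # strict: well-formed input only
    except etree.XMLSyntaxError:
        # Salvage mode: libxml2 recovers what it can from minor
        # conformance problems (stray characters, unclosed tags, ...).
        doc = etree.fromstring(raw, etree.XMLParser(recover=True))
        if doc is None:
            raise  # nothing recoverable; re-raise the original error
        return doc
```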