What do you mean, the original XML payload wasn't flat? At first glance XML is just as inherently flat as JSON (being a text string), but I guess there's some internal referencing semantics for XML documents? What did you really lose moving away from XSLT?
Anyway, I think I'd rather have a nice programming language that's also well-suited to data transformation. I'd ditch XML for something more elegant and flexible and have a transformation language to match. That's my dream, anyway.
What I meant was that it was a collection of different but related objects. I had nothing to do with creating the structure and content of the payload, so I was stuck using it. The XSLT had grown organically over several years to solve each immediate problem and didn't appear to have been designed as a whole. This wasn't a problem for XPath, because it could walk backwards through the RTF and index into it using keys contained elsewhere in the payload. I'm sure it didn't perform as well as some alternatives, but it's still a powerful and unambiguous language in that respect.
Another subtlety was their use of attributes in the XML payload, which were simply folded in as ordinary properties in the JSON representation. I did some hand-waving earlier by suggesting the original system was configured to emit JSON. That would eventually be the case, but for the POC it was transformed by an intermediate process that did a straight transliteration. This made the JSON a little less friendly: I had arrays of objects where a dictionary would have described them better, but that approach had to stay generic.
Once I was in the transformer portion of JS code, I converted the arrays into something more usable. This still meant nesting multiple loops as I iterated over the collections and built usable objects.
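To give a rough idea of what that meant (the shapes here are hypothetical, not the client's actual payload), the conversion boiled down to folding arrays of objects into keyed dictionaries, something like:

```javascript
// Hypothetical shape, not the client's actual payload. A generic
// XML-to-JSON transliteration tends to produce arrays of repeated
// elements, with former attributes mixed in as ordinary properties.
const payload = {
  items: {
    item: [
      { id: "A1", label: "First thing", qty: "3" },
      { id: "B2", label: "Second thing", qty: "7" },
    ],
  },
};

// Fold an array of objects into a dictionary keyed by one of its properties.
function indexBy(list, key) {
  return list.reduce((acc, entry) => {
    acc[entry[key]] = entry;
    return acc;
  }, {});
}

const itemsById = indexBy(payload.items.item, "id");
// itemsById.A1.label === "First thing"
```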
The original programmer had made some attempt at XSLT best practices: the transformation was broken up into multiple templates that were then applied, but there was too much reliance on conditional variables for my taste.
Arguably, had the JSON payload been rewritten and structured better, the JS approach would have worked. But this was building a plane in mid-flight; I didn't have that luxury. I had to ingest JSONified XML and emit a text document.
To make the template easier to read in the source, I defined it using Mustache. Readability was probably the biggest issue with the existing system: with all of the conditionals, it was next to impossible to know what the final transformation would produce. That's why the client was looking for other options; the existing system was becoming too costly to maintain.
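For a flavour of what that looked like (a simplified sketch with made-up fields, not the production template), the output text sits right in the source, with Mustache sections handling the repeated and optional parts instead of scattered conditionals:

```javascript
// Simplified sketch, not the production template. Assumes the
// "mustache" npm package; the view object here is made up.
const Mustache = require("mustache");

const template = [
  "Order {{orderId}}",
  "{{#items}}",
  "  - {{label}} x {{qty}}",
  "{{/items}}",
  "{{^items}}",
  "  (no items)",
  "{{/items}}",
].join("\n");

const text = Mustache.render(template, {
  orderId: "12345",
  items: [
    { label: "First thing", qty: 3 },
    { label: "Second thing", qty: 7 },
  ],
});
// Renders one "  - ... x ..." line per item; the {{^items}} section
// only appears when the array is empty.
```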
I think this is what I was trying to get at in my original comment. XSLT worked better as a data transformer, and XPath made it easier to describe these transformations than I could in JS alone. On the other hand, the JS was easier to maintain, and if your background is only in procedural languages, it was easier to read and write. The problem in this case was that the data they were using to drive the process was already in XML, and XSLT was a known, working solution.
I still think the best solution in this case would have been rewriting the backend, but that wasn't an option. The next best would have been to use XSLT to transform the data into something more manageable in JS, but the client wanted to eliminate XSLT entirely. My POC was the end result, for better or for worse. It was still an interesting project.