Can you elaborate on what you're referring to? I can see performance becoming a problem if you repeatedly chain operations that can't be optimised in bytecode, since, excluding the in-place operations, I believe every op re-allocates the collection.
David Autor was recently interviewed by Martin Wolf on the effect of AI on jobs. One question was whether it's fair to compare a possible economic shock to knowledge work with the China shock in manufacturing. He had two responses:
1. The geographic dispersal of knowledge work should allow retraining of displaced workers, in contrast to the loss of manufacturing jobs, which centred on single-employer towns.
2. The China shock resulted in a sudden drop in prices, whereas AI would lead to efficiency gains.
The second point, to me, feels more pertinent, and combined with the first could allow for a freeing up of labour, ideally into higher-value-add work. The time horizon is also worth discussing here: most economists will be thinking 5-10 years out, over which we can expect substantial improvements in models, but barring new model architectures it seems doubtful that we'll see some sort of emergent intelligence from LLMs.
Post-ASI, knowledge labour necessarily has zero value, at which point the challenge is to design an equitable society.
Once you have algebraic data types in a language, writing a recursive visitor pattern is pretty simple.
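For illustration, a minimal sketch in Haskell (the Expr type here is hypothetical, just to have something to traverse): with an ADT, the "visitor" is simply a pattern match on each constructor that recurses into the children.

```haskell
-- Hypothetical expression tree.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr
  deriving Show

-- Rewrite every literal in the tree; the recursion is the whole pattern.
mapLits :: (Int -> Int) -> Expr -> Expr
mapLits f (Lit n)   = Lit (f n)
mapLits f (Add l r) = Add (mapLits f l) (mapLits f r)
mapLits f (Mul l r) = Mul (mapLits f l) (mapLits f r)
```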
Encoding the semantics of a tree traversal operator, on the other hand, is difficult in the general case. What exactly would the order be? What if I want to traverse in a non-standard ordering? What about skipping branches? All would be difficult to represent cleanly.
I have seen it done where the visitor returns actions, the key ones being recurse, stop, replace, and replace-then-continue with some function, but again, this is pretty simple to implement yourself.
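Roughly, I'd sketch that action-returning style like this (same hypothetical Expr type as above; the action names are my own, not from any particular library):

```haskell
data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

-- What the visitor asks the traversal to do at each node.
data Action
  = Recurse          -- descend into the children as normal
  | Stop             -- keep this subtree untouched, don't descend
  | Replace Expr     -- swap in a new subtree, don't descend
  | ReplaceThen Expr -- swap in a new subtree, then descend into it

traverseWith :: (Expr -> Action) -> Expr -> Expr
traverseWith visit e =
  case visit e of
    Recurse        -> descend e
    Stop           -> e
    Replace e'     -> e'
    ReplaceThen e' -> descend e'
  where
    descend (Add l r) = Add (traverseWith visit l) (traverseWith visit r)
    descend (Mul l r) = Mul (traverseWith visit l) (traverseWith visit r)
    descend leaf      = leaf
```

Returning Stop or Replace is what gives you branch skipping; the ordering question is still baked into descend.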
Surely, though, from a software architecture perspective, types as contracts (assuming pure functions and maximally restrictive types) is a good model for reasoning about a system. A function such as:
addIfEven: Int -> Int -> Maybe Int
even without knowledge of the implementation, tells us the caller has 'entered into a contract': addIfEven will reduce two integers, subject to some condition. The contract is also two-way: if I modify the implementation of the function, there's a good chance I'll be forced to modify the signature, thereby letting me know statically that I've broken the contract with the call site.
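To make that concrete, here is one plausible implementation in Haskell; the particular condition (both arguments even) is an assumption on my part, since the signature alone only promises that failure is possible:

```haskell
-- One possible body for the signature above. The Maybe forces every
-- caller to handle the case where the condition doesn't hold.
addIfEven :: Int -> Int -> Maybe Int
addIfEven x y
  | even x && even y = Just (x + y)
  | otherwise        = Nothing
```

If the function later needed to report why it refused (say, returning an error message instead of Nothing), the return type would have to change, and every call site would be flagged at compile time, which is exactly the two-way contract at work.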
Surprisingly, one of the best summaries (~10 pages) of applied linear algebra I've found is in Nielsen and Chuang's Quantum Computation and Quantum Information.
It's presented mostly without proofs, which, whilst arguably limiting, isn't really relevant given what their goal is.