Maybe you are right and there is some subset of mathematics so complex and lengthy that it becomes incomprehensible without a concise shorthand. I don't know enough about high-level mathematics to refute that. I do know that the original article we are commenting on is not an example of that.
I do think that by switching to code we don't lose the ability to declare new semantic meanings. But instead of inventing new ways to draw things, we can rely on our existing language: we define new functions. That's the beauty of having a standardised, flexible language.
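As a minimal sketch of what I mean (Python purely as an illustration, and the operation itself is a throwaway example I picked): where a paper might coin a new symbol, in code you name the idea once and then reuse the name.

    # Hypothetical toy example: instead of inventing a new glyph for an
    # operation, define it once with a descriptive name and reuse that name.
    def symmetric_difference(a: set, b: set) -> set:
        return (a - b) | (b - a)

    print(symmetric_difference({1, 2, 3}, {2, 3, 4}))  # {1, 4}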
You write that mathematicians like to declare new notation. I think that's actually a liability. As another commenter pointed out in this thread, mathematical notation is sufficiently freeform that different fields of mathematics have different "dialects", and knowing one doesn't make it trivial to read another. There is no formal definition, no formal grammar.
At the same time, I just don't know that "H^*(pi_1(RP^2)) with a line over RP^2" is actually better than inducedHomology(fundamentalGroup(OnePointCompactification(Projectivization(RealAffineSpace(dimensions=2))))), or that it's desirable to compress it down to six small symbols. Obviously, if you are going to use such a formula a lot, then just like in programming you'd "refactor" it into a parameterised function. Now you have reduced the cognitive overhead of understanding every use of this formula going forward: as a reader, you understand that function once, and when you see it again you don't have to reparse it or scan for differences from previous similar incantations. DRY applies to mathematics too.
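To make the refactoring point concrete, here's a deliberately mundane Python sketch; the formula is a stand-in I made up, not the cohomology example above:

    import math

    # Before: the same expression pasted wherever it's needed, so the reader
    # has to re-parse it and scan for small differences every time.
    a = 3 ** 2 / (2 * math.sqrt(2 * math.pi))
    b = 5 ** 2 / (2 * math.sqrt(2 * math.pi))

    # After: name it once, understand it once; every later use is a single
    # recognisable call.
    def scaled_square(x: float) -> float:
        return x ** 2 / (2 * math.sqrt(2 * math.pi))

    assert a == scaled_square(3) and b == scaled_square(5)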
Expressing the math in code, to me, means expressing it with a small library of primitives, unambiguously and formally. (Regular mathematical notation is ambiguous, like the multiplication example I mentioned, and it can define new primitives as it goes along.)
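A small illustration of the ambiguity point (again, just a sketch): on paper, whether "f(x + 1)" is a product or a function application depends on context, while in code the two are spelled differently and can't be confused.

    # Multiplication is always written out, so it can't be mistaken for
    # function application.
    f = 2.0
    x = 3.0
    product = f * (x + 1)          # unambiguously a product

    def g(t: float) -> float:      # a function has to be declared before use
        return t + 1

    application = g(x + 1)         # unambiguously a function call
    print(product, application)    # 8.0 5.0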
Now combine that formal code expression with programming best practices like sensible variable naming, DRY and so on, and I really feel there should be some kind of tangible advantage. (That's before we even start to think about the possibility of unit testing parts of a greater work. Have there ever been instances of someone writing a physics paper with a calculation error buried somewhere deep inside?)
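For the unit-testing aside, here's a sketch of what that could look like once a piece of a derivation is expressed as a plain function (the formula is just a familiar stand-in):

    # A small fragment of a larger calculation, expressed as a function ...
    def kinetic_energy(mass: float, velocity: float) -> float:
        return 0.5 * mass * velocity ** 2

    # ... can be checked in isolation against cases worked out by hand,
    # instead of sitting unverified deep inside a bigger derivation.
    def test_kinetic_energy():
        assert kinetic_energy(2.0, 3.0) == 9.0
        assert kinetic_energy(0.0, 10.0) == 0.0

    test_kinetic_energy()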
At the risk of paraphrasing you badly, you write that you just don't think it's worth the time to express all the varied mathematical concepts in code because the math is too complicated. I feel the opposite. Because it's complicated, it'd be good to take the time to express it plainly.