Striking parallels between mathematics and software engineering (oreilly.com)
78 points by datascientist on Jan 19, 2015 | 33 comments



Linear algebra got a lot easier for me when I realised that matrices are just linear functions, and matrix multiplication is function composition.
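A minimal numpy sketch of that idea (illustrative values only): applying the product A @ B to a vector is the same as applying B first and then A, i.e. composition.

    import numpy as np

    # A matrix is "just" the linear function v -> A @ v.
    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    B = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    v = np.array([5.0, 6.0])

    # Matrix multiplication is function composition:
    # (A @ B) applied to v == apply B, then apply A.
    assert np.allclose((A @ B) @ v, A @ (B @ v))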

Abstract algebra and category theory give you incredibly useful program and API structuring techniques; you can get lots of nice properties for free by following these well-worn existing patterns, even more so than OOP design patterns. We make use of them a lot in Haskell, but they’re basically language-agnostic.
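As a rough sketch of what following such a pattern buys you (hypothetical names, Python rather than Haskell): anything that supplies an associative combine and an identity, i.e. a monoid, gets generic folding code for free.

    from functools import reduce

    # Generic code written once against the "monoid" pattern.
    def concat_all(items, identity, combine):
        return reduce(combine, items, identity)

    # Integers under addition fit the pattern...
    assert concat_all([1, 2, 3], 0, lambda a, b: a + b) == 6
    # ...and so do dicts under merge, lists under concatenation, etc.
    assert concat_all([{"a": 1}, {"b": 2}], {}, lambda a, b: {**a, **b}) == {"a": 1, "b": 2}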

Even simple algebraic manipulations, like noticing that “return” is distributive over a conditional:

    if x then return y else return z

    return (if x then y else z)
are very useful for restructuring programs to be more readable.
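In Python, for example, the rewrite is only possible because the language has a conditional expression alongside the if statement (illustrative function names):

    # Before: "return" duplicated in both branches.
    def pick(x, y, z):
        if x:
            return y
        else:
            return z

    # After: the return is factored out; the conditional becomes an expression.
    def pick_refactored(x, y, z):
        return y if x else z

    assert pick(True, 1, 2) == pick_refactored(True, 1, 2) == 1
    assert pick(False, 1, 2) == pick_refactored(False, 1, 2) == 2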


> Linear algebra got a lot easier for me when I realised that matrices are just linear functions, and matrix multiplication is function composition.

As my former exercise instructor for linear algebra told the math students at the beginning of the second semester: those who have not yet understood that matrices are more than a box of numbers are definitely in the wrong place here.



This one is beautiful as well.

It was mind-opening for me to think of types in terms of provability (intuitionistic/constructive logic) rather than truth (classical logic). The only way to prove that a function returning a value of type T actually halts (and therefore does not actually “return” bottom/void) is to run it and obtain that T value, i.e., to find the object that the type claims exists.


For the second return to work, doesn't the if have to be an expression, i.e. return the result of evaluating either y or z as the value of the if expression? If the language's if is a statement, as in, say, JavaScript, then it won't work.


Yes. And that is why statement–expression distinctions are stupid, because they obscure these relationships. In C-land it would be “if-else” vs. “?:”.


This is why one of the most valuable uses of time for mathematicians, and especially software engineers, is to study the historical development of techniques and technologies. Understanding how matrices are a product of the quest for solutions to systems of linear equations gives you a much better idea of when and how to apply matrix techniques. Most math textbooks limit themselves to "the determinant is defined by ad - bc", which on its own is an almost completely useless fact to know outside of taking an exam.
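For what it's worth, ad - bc stops being useless the moment it's tied back to those systems of equations; a small numpy sketch (made-up numbers):

    import numpy as np

    # The system  a*x + b*y = e,  c*x + d*y = f  has a unique solution
    # exactly when the determinant a*d - b*c is nonzero.
    M = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    rhs = np.array([5.0, 10.0])

    det = M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]   # ad - bc = 5.0
    if det != 0:
        print(np.linalg.solve(M, rhs))            # unique solution: [1., 3.]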

Software engineering is much worse in terms of useless complications developed by people who don't know previous solutions (angular.js? Great! Have you ever heard of dataflow programming? Constraint satisfaction? Dynamic binding? No? No wonder angular is such a piece of shit).


To be a bit contrarian, linear algebra is waaaay more useful than just as an outgrowth of solving systems of equations.

My own point of view is that linear algebra is by far the most successful part of mathematics: 'Most' questions you can come up with have satisfactory answers. This is in contrast to, say, number theory, where there's a bunch of nice elementary results and a lot of interesting questions that seem nigh impossible to solve.

As a result, it's a pretty common game in mathematics to start with something new or difficult that you want to describe, and then do your level best to turn your questions into linear algebra problems so that you can actually get answers. The extent to which this doesn't work is the extent to which you need to develop new ideas. (One example of such an approach is algebraic graph theory. Turn a graph into an interesting matrix, and then use the linear algebraic properties of that matrix to describe interesting properties of your graph.)
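A tiny illustration of that move, using a made-up 3-node graph: a purely linear-algebraic computation (matrix powers) answers a purely graph-theoretic question (counting walks).

    import numpy as np

    # Adjacency matrix of the triangle graph on nodes {0, 1, 2}.
    A = np.array([[0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 0]])

    # Entry (i, j) of A^k counts the walks of length k from i to j.
    walks = np.linalg.matrix_power(A, 3)
    print(walks[0, 0])   # 2 closed walks of length 3 from node 0 (the two triangle orientations)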


Linear algebra is one of my favorite subjects. Its design is beautifully simple, yet extremely powerful. Half of modern machine learning (and all of Matlab) is built on matrix algebra. And the existence of fast software for numeric linear algebra makes it practically applicable.

The link to graph theory is beautiful, too. Entries in a matrix can represent edge weights, a step of a random walk on a graph is just a matrix-vector multiplication, and the stationary distribution is the eigenvector of the transition matrix with eigenvalue 1. How cool is that? When you start to link together abstractions from different fields of mathematics and science, you get these fantastic insights that are just mind-bogglingly awesome. This is what makes all the pain of wading through an ocean of symbols and equations worthwhile, imho.
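A small sketch of that link, assuming a made-up 3-node graph: each step of the walk is one matrix-vector product, and iterating converges to the stationary distribution.

    import numpy as np

    # Column-stochastic transition matrix of a random walk on a triangle graph.
    P = np.array([[0.0, 0.5, 0.5],
                  [0.5, 0.0, 0.5],
                  [0.5, 0.5, 0.0]])

    # Start anywhere; one step of the walk is just P @ dist.
    dist = np.array([1.0, 0.0, 0.0])
    for _ in range(100):
        dist = P @ dist

    print(dist)   # converges to the stationary distribution [1/3, 1/3, 1/3]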


> My own point of view is that linear algebra is by far the most successful part of mathematics: 'Most' questions you can come up with have satisfactory answers. This is in contrast to, say, number theory, where there's a bunch of nice elementary results and a lot of interesting questions that seem nigh impossible to solve.

I view that as "if you can find a way to express your problem in terms of linear equations, there are well-known techniques for finding solutions."


I agree. I also always found I learned math better when it came from authors who knew the history of the topic and where the different ideas fit in the wider landscape of mathematics or CS. Knuth and Terence Tao are two people who do this amazingly well and, I believe, really prioritise it.


Can you suggest any good reads which incorporate the history & motivation of linear algebra? I have had some experience with the common text books, but never put much effort into incorporating it into the way I think, precisely because it did not seem worth my time to just memorize methods without much context.


I recently enjoyed Bashmakova and Smirnova's The Beginnings and Evolution of Algebra. It covers the development of algebra from Babylonian math to group theory. One of the valuable things it does is cover the evolution of notation for mathematical problems. Now next time someone complains that "Prefix is hard! Infix is natural!" I can throw this book at them.


'Proofs and Refutations' is a fantastic book about the process of mathematical exploration, whose primary worked example ('What is a polytope? And what does Euler's Formula mean?') ends up being a non-obvious translation of geometry into linear algebra. It's a really wonderful read, and will give a bit of a sense of how mathematicians think about linear algebra, in a way that most textbooks don't.


If you can find a way to get a hold of it, Saunders MacLane's Mathematics: Form and Function is a tremendous book for understanding how math arises from the world and reflects it. What's written here is a lot more specific than that, of course, but if this illustration interests you then it might be a good book to try digging into a little bit.

On the other hand, while I am always cognizant of and amazed by the POV of math as a human construction, there's something a little otherworldly about it from time to time. In the same way that you sometimes hit a code design that feels so damn good, math, especially older math, is just a huge collection of such designs. Somehow, despite all of this apparently coming from our minds, we hit on design decisions that are just so sweet that they last millennia. This is what inspired things like Voyager: maybe it's hubris, but it just has to be the case that aliens speak mathematics.

So, I encourage anyone excited by this: math isn't "hard", it's just big and wonderful. You'll never finish learning it, but the journey will be incredible.

I'll close by linking two more great resources (edit: to be clear, really the first one is the great resource... the second is just me talking, not really great at all). First, Paul Erdős, a famous mathematician known especially for combinatorics, loved this idea I espouse above. In his view, God had a small number of "proofs" in mind when he designed mathematics, so wonderful that their beauty is completely self-evident. After Erdős "stopped doing math" (passed away), people compiled some of his Proofs from The Book, along with others they imagine he would have so regarded, into a great textbook called, unsurprisingly, Proofs from THE BOOK [0].

Finally, I'll self plug a little essay I wrote, actually in another HN comment, a while ago about learning mathematics.

http://jspha.com/posts/there_is_no_royal_road_to_mathematics...

[0] http://www.amazon.com/Proofs-THE-BOOK-Martin-Aigner/dp/36420...


The reason you notice it more with older math is that older math is older, and has had more time to be improved.

Imagine if you had an API that had been in use and continuously improved for millennia.


"Continuously improved" is important to note too. Backwards compatibility isn't really a thing in mathematics. You simply have to create enough value and teach people how to use your new mechanism.


As someone who grew up writing code, and is now studying mathematics at a tertiary institution, I was quite surprised to read that parallels between mathematics and software engineering are 'surprising'.

On the contrary, mathematics has formed the basis (no pun intended) of so much software engineering. Take, for example, the very concept of a function/subroutine/method: this comes straight from the world of mathematics (albeit with minor modifications to make it convenient).

The algorithms that do all the heavy lifting in order to facilitate this web browsing experience are all grounded in mathematics - memory management in your kernel, database {everything}, even HTML layout management! The whole of complexity and asymptotic analysis is actually just mathematics.

Many of the pioneers in computer science started out as mathematicians: Alan Turing, John McCarthy, and Donald Knuth, for example.

It may not seem like it on a daily basis writing CRUD apps in an OO language, but software engineering is inextricably linked to mathematics. Such results are the furthest from surprising!


You are confusing software engineering with computer science. Most HN users certainly know that the fundamental theories of computation are rooted in rigorous mathematics. The point of the article is to share the insight that mathematical notation is itself a constructed system, much like a complicated software implementation, whose design decisions provide users with powerful abstractions.

Did you even read the article before lecturing us about Turing and Knuth??


If that's true then why do mathematicians seem allergic to improving their own language, while software developers seem to constantly invent new ones?


There's a concrete object-oriented Python implementation of much of abstract algebra in Sage. You can see some of that here: https://github.com/sagemath/sage/blob/master/src/sage/struct...

(Disclaimer: I'm a Sage developer.)


That's pretty interesting, thanks for sharing.


I've just started reading http://en.m.wikipedia.org/wiki/Where_Mathematics_Comes_From

It's about math as a human construction. Very comforting to see one's inner metaphors and mental models of math "legitimised" and exposed in a scientific framework. One could hope for a companion: Where Software Engineering Comes From.


Cool! Thanks for the link. I've not read that book, but it looks very interesting and related. It's great to see others' perspectives on math as a human construction.

Another book on the topic of history of mathematics is "Journey Through Mathematics" by Enrique Gonzalez-Velasco. From its back cover:

"This book offers an accessible and in-depth look at some of the most important episodes of two thousand years of mathematical history. Beginning with trigonometry and moving on through logarithms, complex numbers, infinite series, and calculus, this book profiles some of the lesser known but crucial contributors to modern day mathematics."


I've had a similar realisation regarding the parallels between software design and scientific theories. In science you have observed phenomena, and you try to find the best (preferably simplest) theory/abstraction that will both explain them and also explain phenomena you haven't observed yet. In software engineering, you have a set of current requirements, and you try to find the best (and hopefully simplest) architecture/abstraction that will meet them but also naturally accommodate requirements you haven't foreseen. Both science and software engineering devise abstractions that try to convey some "higher truth" beyond the given data, and both invariably need to be reassembled into a higher abstraction when reality doesn't agree with them; the test of a good scientific theory, like that of a good software architecture, is how long the abstraction can hold before something doesn't fit the model.


It wobbled around a little bit, but the main idea is a good one: mathematics, like software, is a human construction, and just like all human constructions it is susceptible to encoding historical accidents as truths.

The other theme in the article about algebra and minimally acceptable abstractions for accomplishing a goal is unfortunately nowhere to be found in software.


Interesting. While I agree with the general premise (a lot of mathematics is a human-made construction), the concrete examples she provided feel somewhat forced. Especially the comparison to OOP - I personally don't see how it adds to understanding the various subdivisions defined by abstract algebra.


That's because the inheritance in abstract algebra is so bloody obvious. At least when I studied math, it was the first subject in which the template was:

Let's define something, and consider what we can figure out about it.

Now let's add another property, and see what we can figure out.

Now let's add yet another property, and see what happens.

Etc.

One winds up familiar with the whole progression group --> abelian group (although not much time is spent on that one) --> ring --> integral domain --> unique factorization domain --> principal ideal domain --> Euclidean domain --> field.


This sort of subtyping tree available in abstract algebra isn't the same as an inheritance tree. Inheritance merely layers constructors.

What you really want to consider is interface subtyping and layering. Actually, more than just interfaces you want to also carry along laws. For instance, a group is a monoid with an inverse operation appended (interface concatenation) cut down by the fact that the inverse operation must be "nice" (law concatenation). If you only append the interfaces you end up with free structures.
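A rough Python sketch of that layering (hypothetical class names, not any real library's design): the Group interface concatenates an inverse onto the Monoid interface, and the law is expressed separately as a property the new operation must satisfy.

    from abc import ABC, abstractmethod

    class Monoid(ABC):
        """Interface: an associative combine() and an identity()."""
        @abstractmethod
        def combine(self, other): ...
        @classmethod
        @abstractmethod
        def identity(cls): ...

    class Group(Monoid):
        """Interface concatenation: a Monoid plus an inverse()."""
        @abstractmethod
        def inverse(self): ...

    # Law concatenation: the appended operation must interact "nicely"
    # with the existing ones.
    def satisfies_inverse_law(x):
        e = type(x).identity()
        return x.combine(x.inverse()) == e and x.inverse().combine(x) == e

    class AddInt(Group):
        """Integers under addition, as one concrete instance."""
        def __init__(self, n): self.n = n
        def combine(self, other): return AddInt(self.n + other.n)
        @classmethod
        def identity(cls): return AddInt(0)
        def inverse(self): return AddInt(-self.n)
        def __eq__(self, other): return self.n == other.n

    assert satisfies_inverse_law(AddInt(7))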


Object orientation can be read as "category theoretic, but with the arrows missing." And category theory certainly adds something to the algebra...


I don't think it is ever tenable to claim that mathematics is purely a human construction. The formalist viewpoint is that mathematics is just the manipulation of symbols according to certain rules. But formalists accept that these rules for manipulating symbols are themselves mathematical. That is, the formalists are actually Platonists with regard to metamathematics. The Platonist says "1 + 1 = 2 is a meaningful, true statement", while the formalist says "'P is a correct proof that 1 + 1 = 2' is a meaningful, true statement".


A middle ground seems reasonable. The naturals probably correspond to a physical phenomenon—it seems to be possible to have, say, 1 proton—but I have deep doubts about the existence of the reals.


Math is good for explaining some logic, but I wish it was more verbose and used less global scope.



