A Programmer’s Regret: Neglecting Math at University (awalterschulze.github.io)
239 points by jasim on July 30, 2019 | 119 comments



I studied maths and computer science at university for precisely this reason - I heard many programmers say they wish they knew more mathematics. Honestly there have been very very few times where my knowledge of mathematics has directly helped me with coding, possibly even none.

Unless you continue to study mathematics, which you will likely only do if you stay in academia, you will forget what you've learned! It boggles my mind to think that I used to know advanced calculus, group theory, linear algebra etc. I have forgotten all of it bar the very basic concepts behind them.

That said, where it has helped me is in logical thinking - stuff like abstractions, as the article describes. Writing mathematical proofs requires logical precision, which is very helpful when programming. If you spend enough time studying mathematics then eventually some of the core concepts will sink in and help you think of solutions. Though I would not advise studying it with the aim of thinking "How can I use this knowledge in programming?" - it's not directly applicable unless you are doing graphics programming, AI, data science etc.


"It boggles my mind to think that I used to know advanced calculus, group theory, linear algebra etc. I have forgotten all of it bar the very basic concepts behind them."

I thought it was just me!

edit: we should add confessions like that DHH Twitter thread where all these advanced programmers confessed they had to look up print() commands on Google or whatever it was.

edit#2: The point might be that you don't remember linear algebra, but you remember there is that tool to help. If multiple, parallel computations come up, you know to look up linear algebra. If a rate-of-change question comes up, you know to brush up on calculus. If the halting problem comes up, you know it isn't solvable, etc.


I don’t think learning something and forgetting it should really be justified; it’s just not an efficient use of time.

If you wanted the benefit of knowing tools that help with linear algebra, memorize that; there’s no point cramming the entire thing just for a class.

I think a good remedy to this issue is spaced repetition software like Anki/SuperMemo. It’s a bit messy to use with math, especially with something like a proof, since the cards are meant to take you no longer than, say, 10 seconds to solve. It is doable though if you can break things down into atomic components. And it can be worth more than you realize to have usable access to those memories for decades rather than just until the month after your class.

I think it’s difficult though to make classes focus on such long-term memories, because tests create such poor short-term incentive structures, which aren’t easily replaceable. I think the only way to get around this is to just focus on learning what you want, and if you’re missing something that seems useful, to acquire it on your own rather than being required to study something and just forgetting it as soon as the test passes.


I've also found Anki a good remedy to the "I feel bad that I don't remember this from undergrad" problem. I'm still nowhere near as fast at actually doing maths as when I was taking or teaching it. But now when some basic, relevant mathematical fact comes up in life and I've forgotten it, it just means there's one more thing I need to put into Anki, not that my mathematical knowledge is on a steady, irreversible downtrend.


Likely unwanted commentary, but how do you do leech handling and the minimum information principle with your math cards? I'm a SuperMemo user, and I think the biggest failures I see with Anki in general are that it doesn't handle leeches well (leeches are cards you repeatedly fail that end up taking most of your time) and that users fail to adhere to the minimum information principle (only one thing to recall per card). I think with math this would become even more difficult: if you make minute-long cards to go over some proof, you will definitely be setting yourself up to suffer.


I'm only one year into using Anki and to be honest mathematics hasn't been a huge focus for me, so I'm not a great person to ask. Have you read this article by Michael Nielsen? http://cognitivemedium.com/srs-mathematics

I've noticed that I spend a lot of time on leeches and sometimes I go back to break them down into smaller pieces, but I haven't found it to be a major problem. How does SuperMemo handle it?


A long time ago I gave up the "manager track" and went back to being an individual contributor, mostly because I noticed I was having to look up things I "ought" to have known.

Slowly now I am settling into the idea (the confidence) that I am actually pretty good at this, and that occasionally looking up how to open a CSV file is just an artifact of knowing why and what, if not precisely how.

(it's also a good indication that another abstraction that is easier to remember might be useful)


I’ve spent the last few months immersed in modern C++ after mainly doing C# at my job over the last few years. I opened up a C# REPL (can’t recall why exactly) and realised I wasn’t exactly sure how to print to the console....


Lol, I felt amazing the other day for being able to write an ajax request without looking it up after more than half a year of not coding in javascript. I was already looking it up while testing it and was actually surprised when I saw that it was working.


I tend to think of math as having two parts: a way of thinking along with all the things people thought up by thinking that way. To learn the first, you have to study the second.

Examples of the second are calculus, group theory, algebra, etc. Examples of the first are being able to define useful words with useful logical consequences, knowing when you can infer things beyond what can be directly tested, detecting patterns that might profitably be abstracted, etc. It's a bunch of force multipliers for your logical mind.

Specific subjects were developed with the goal of solving particular kinds of problems, so if you are faced with such a problem, it can be very useful to know what people have already figured out! However, if you're not, then it might not be so useful remembering these details.


>Honestly there have been very very few times where my knowledge of mathematics has directly helped me with coding . . .

> its not directly applicable unless you are doing graphics programming, AI, data science etc.

Well, those are pretty large fields!

If you ever get to touch machine learning, that's all about linear algebra, sparse methods, numerical methods, calculus in many dimensions (e.g. gradient descent).
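To make that concrete, here is a minimal sketch of gradient descent in one dimension (the toy function f(x) = (x - 3)^2 and the step size are my own illustrative choices, not from any real ML library):

    def gradient_descent(lr=0.1, steps=100):
        # minimize f(x) = (x - 3)^2, whose derivative is 2 * (x - 3)
        x = 0.0
        for _ in range(steps):
            x -= lr * 2 * (x - 3)  # step against the gradient
        return x

    print(gradient_descent())  # converges to ~3.0, the minimizer

The multi-dimensional case is the same idea, with the gradient vector in place of the single derivative.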

Do any kind of image or signal processing, and you run into your old friend the Fourier transform, which is, again, analysis, linear algebra, complex numbers.
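As a minimal sketch of that (the 5 Hz toy signal here is made up purely for illustration; numpy's FFT routines do the real work):

    import numpy as np

    fs = 100                                      # sampling rate in Hz
    t = np.arange(0, 1, 1 / fs)
    signal = np.sin(2 * np.pi * 5 * t)            # a 5 Hz sine wave

    spectrum = np.abs(np.fft.rfft(signal))        # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)  # frequency bins in Hz
    print(freqs[spectrum.argmax()])               # prints 5.0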

In my previous job, I got to have fun computing an orthonormal basis of polynomials on a disk, and doing some linear algebra with that (although to be fair, fields like computational lithography are probably more math-heavy than others).

The number of programming jobs where math is directly applicable is on the rise.

That's not why I studied mathematics. I did it because it is an art[1][2], and because I can. But it certainly comes with benefits in the workplace.

[1]https://archive.org/details/hardy_annotated/page/n1

[2]https://www.maa.org/external_archive/devlin/LockhartsLament....


I think you stumbled exactly into why people say that (including myself) - proofs.

With AI, ML, and Data Science becoming hotter and hotter topics, a lot of the material is still VERY heavy on mathematical proofs. This [1] is an example of understanding the Lagrange multipliers for SVMs. It is not a worked example, but a repetition of the mathematics. If you are shaky on what dL/db means, then you're lost. It doesn't help that the only other options are "import sklearn" tutorials, so you only get a slight understanding of the inner workings of something like an SVM. You still can't do it by hand if, say, you were given a toy problem with 10 items.

[1] https://towardsdatascience.com/understanding-support-vector-...


I don't think understanding proofs is essential to understanding AI papers.

A lot of papers are "dressed up" with proofs, many of them bad. But that doesn't matter if the technique is effective.


Academic papers in computing don't tend to be written with lay practitioners in mind.

If I want to try out a technique in a business setting, I ideally want at least one of:

* A library or application I can run

* A code sample I can copy and modify

* A pseudocode block I can turn into real code

What is less helpful is some mathematical equation in which one of the variables is the result of a technique described in another paper and a whole load of unfamiliar concepts named after people.


Agreed. In fact, code can be more expressive than mathematical notation, and providing code would do a better job of demonstrating a theory.

A lot of the proofs are just to legitimize papers. The cynic in me also thinks that by not providing code, the authors intentionally make it more difficult to reproduce results to hide tampered data. You see the same happen in science all the time with p-hacking, or with just plain fraudulent data that is purposely difficult to reproduce.

Luckily there are a good number of papers that provide a repo link, but not enough yet.


Sure, but if I'm reading a paper that starts to show equations, I can start getting lost. There's no "knowing" it's garbage, because it gets over-math'd. I get that's a sign of a bad paper, but my only argument for "why" is that I can't understand it. Not exactly the strongest defense.

The same goes for stats - if you are uncomfortable with them, then terms like Wilcoxon and Spearman are just confusing. Should the number be high? Low? Or simply, why do some papers have one and others have the other? If you are shaky on stats, it's easier to just accept that a particular formula was used, even if it is not appropriate.


My point was you can understand a paper without understanding all of the proofs shown. Following the proof isn't essential to understanding what the technique is.

There are papers with bad proofs but valid techniques, because NN techniques are difficult if not impossible to prove mathematically. Empirical data is much more valuable.


Right, but my point is: what if I want to implement the math? Without deep-diving into my research, I disagree with your second sentence entirely. It is why I think we are also seeing the rise of Jupyter-notebook-style articles - because you can explicitly follow along. As a rule of thumb, I try to assume no one understands anything, so I must be very explicit in anything I teach. I find this a better baseline than assuming everyone already knows what I'm about to teach.

Back to my point - if I want to implement something I've read in a journal article, say on queuing theory that I'd like to work into my CS education research, having a weaker foundation in math has made it harder for me to implement someone's work.


What you said is true, lacking the mathematical intuition will limit you. But understanding the proof is not essential to implementing the approach.


Let me shamelessly advertise a book for programmers I am writing for this exact case: Numerical Linear Algebra for Programmers: An Interactive Tutorial with GPU, CUDA, OpenCL, MKL, Java, and Clojure!

Drafts are available!

https://aiprobook.com/numerical-linear-algebra-for-programme...


> Unless you continue to study mathematics, which you will likely only do if you stay in academia, you will forget what you've learned!

You've remembered more than you think. Most notably, you do (or would, if appropriately prompted) remember that things are possible, and probably a bit about the approach. You don't remember the details of how, but those are details you can look up if needed - you have the framework in which those details fit, and that's the important thing.


Though for the most part I hardly use advanced mathematics in my day-to-day job, it is sometimes the case that math is like a superpower: if I programmed the obvious idea to solve a combinatorics problem, the solution would never be tractable. But then I learn the mathematics of combinatorics, and suddenly I can write a much simpler and much more powerful solution.
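A minimal example of the difference (the 52-choose-5 card-hand count is my own toy illustration): the obvious idea enumerates, the combinatorial identity just computes.

    from itertools import combinations
    from math import comb

    n, k = 52, 5
    # obvious idea: enumerate all ~2.6 million 5-card hands and count them
    brute = sum(1 for _ in combinations(range(n), k))
    # combinatorics: n! / (k! * (n - k)!) answers instantly
    print(brute, comb(n, k))  # both print 2598960

For modest n the brute force still finishes; make n much bigger and only the closed form remains tractable.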

So, math doesn't help frequently (or perhaps I'm not trying to use it enough), but when it does it can help tremendously.

It can also help deflate a lot of the hype in AI and ML, especially if you know the fundamental theory of ML.


> Honestly there have been very very few times where my knowledge of mathematics has directly helped me

And that is the intended use of math! Knowing math is like knowing judo or another powerful martial art. You take great effort to learn and master it, not for its "practical applications", but because you are honestly interested. One day, hopefully never, this knowledge will save the day, and you will be the only one around who has it.


I chose to study mathematics in university precisely because of the second reason: logical thinking. By extension that includes the ability to write proofs. Even if you aren't actually writing proofs on the job, getting into proof-writing mode will make your technical exposition much better.


>it's not directly applicable unless you are doing graphics programming

You missed the point. It actually is directly applicable.

The difference between math and programming is that the axioms of a program constantly change. A reassignment, a mutation, is an axiom mutated.

Otherwise they are the same.

You could invent a style of programming where all things are immutable. Then that style of programming would be virtually indistinguishable from math. I wonder what I would call such a style of programming?


You describe how mathematics can be used to create a new programming language/style.

The OP describes how using current programming languages requires very little knowledge of mathematics.

Both of those perspectives can be true, and from my own personal experience the OP's perspective rings true.


No, my argument is: if you program, you are doing math. That's why techniques from math are applicable to programming. If you know advanced math you can become a better programmer.


I feel like writing almost the opposite: An Engineer's Regret: Focusing Too Much on Math.

I did a lot of calculus, some functional analysis, numerical methods, etc. I've never needed even basic calculus for my engineering job. I did use it a few times, but mostly because I was looking for an excuse to use it. No one cared, and demonstrating such skills played no role in my annual review. Furthermore, everyone around you has forgotten this stuff, so the system will shift to not valuing math.

Of course, jobs do exist that need heavy math. They are not the norm, and there is a lot more competition to get those jobs. With the exception of machine learning, your employer will value you more as a SW engineer than as a math whiz, precisely because the demand for the former is much higher relative to the supply.

(I honestly don't regret it - but the value of knowing advanced math in the professional world is overestimated).


The thing is, software engineer salaries are going down, data scientist salaries are going up, and the data scientist jobs are being taken by people from the sciences, who know that kind of math, which CS courses maybe are not that heavy on. So there is a great big incentive to know "maths" (continuous maths). And if one does not know, the incentive is to learn.


I don't know if what you are saying is true. Where is the data?

But I do believe in the long run, you will be right. Software is saturated, and a lot of the infra has been built or commoditized. The value will be there for people who can analyze data.

Too bad I don't find analyzing data as fun or as interesting as banging out a programming problem.


I think both are held together.

If you have a lot of data, but your underlying system is simple, you don't need a lot of people to analyse it. Your demand for new people crunching that data will come as the complexity of your software grows.

Think of it this way: having 10 endpoints and 10TB of plain-text data vs 1000 endpoints and 1TB of plain-text data. The latter will surely require more time to be analysed, even though the amount of data is smaller.


Disagree.

Software is saturated and consolidated; the infrastructure has been built and is owned by large companies. And that trend will continue.

Data on the other hand is still growing, and there is enough already that has not been analyzed. Traditional software engineering will still be important, but no longer glamorous. The future belongs to the data analysts.


Interesting, can you give a pointer to the data on software engineering salaries declining?


It seems you agree with me:

>With the exception of machine learning,


I feel the post is misguided. Pick any career out there and you'll find that in a complex enough cranny of it there is mathematics. That doesn't mean that plumbing IS doing mathematics, or that fixing the washing machine IS doing physics; at least not in any substantial sense. If you're cutting web APIs, writing wrappers, or wrangling pixels, you won't find maths useful very often.

If you find yourself working in data science, cryptography, compilers, formal design, etc. then obviously maths will be indispensable to you. Having said that, most of the maths that you need in order to pick something up and implement it is within your grasp, and you'd probably have to learn it anyway even if you did know some maths. Recall, there is no "knowing maths"; you only ever know some maths, and once you've done it long enough you develop an ability to "do maths" which is in some bizarre way independent of any actual maths you know.

As a final note: honestly, to hell with lamenting the past. I think if I could do it all over again, I would... and when I did, I'd most probably want to do it all over again, again.


Math is a subject where you don't understand why you need it until you see real-world applications for it, especially higher-level math. Rather than teaching math in a way where you solve equations, students need to be taught to 'think' mathematically, similar to mental models. In fact, a good number of mental models have a strong grounding in mathematics. Seeing the world as a set of mathematical problems gives you a leg up on a number of things.


The thing is - if Programming corresponds to Maths (according to Curry-Howard), what is really happening is that Mathematicians and Programmers are disagreeing over language, not over substance.

Mathematics is a formal language. As such, it's part of the Chomsky hierarchy.

And if it's part of the Chomsky hierarchy - we can build parsers/interpreters/compilers for it. So why not standardise it all into a shared library?

That's exactly what Voevodsky did. https://github.com/UniMath/UniMath/tree/master/UniMath

As computer-assisted proofs become more and more popular, expect plenty of cross-pollination between software engineers and Mathematicians.

Code refactoring. Unit testing. Managing a large, shared code-base. Fun times ahead.


> As computer-assisted proofs become more and more popular, expect plenty of cross-pollination between software engineers and Mathematicians.

Similar things were said in the 1950s and 1960s. I don't remember specifics but our lecturers told us that computer scientists back then thought that they could actually just mathematically express a program and there would be no need for a programmer.

The Vienna Development Method has been around for god knows how many years and has never caught on.

Saying that programming is like maths is the same as saying cooking is like chemistry. Sure, there is lots of chemistry going on in the food, but I doubt many chefs know or care about the chemical properties of organic compounds.

Additionally we actually did a course on VDM back in 2006/2007 and the lecturer said that he had only encountered one team that proved their program to be correct and it took them about a decade.

You could argue the code itself is the mathematical expression. But honestly, as someone who has a very strong maths background and moved to software engineering: unless you are doing something very specific like a sorting algorithm, most code isn't really that algorithmic. In the vast majority of cases you are just expressing a set of business rules, and that, in my experience, is best described by Use Cases / User Stories.

For most businesses this is simply a waste of time. Getting developers to write tests is hard enough and businesses don't see the benefit until things start going horrifically wrong.


So, I remember once as a junior dev, I was asked to write a SQL script to get some data from some tables, etc. I wrote the script and it was taking forever to run. I scratched my head, had a close look, and it was obvious that my script was running in quadratic time (plus, you know - expensive joins and stuff). I rewrote it to run in linear time and, hey presto - the job was done in a flash.

That is how programmers use maths to do their everyday work.

Note that I don't mean you need to know that what you're doing is asymptotic analysis and so on to optimise your code when it gets bogged down in unnecessarily expensive computations; but even if you don't call what you do by its formal name, you are still doing that thing. And that thing is called "using maths".
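For what it's worth, here's the same quadratic-vs-linear idea sketched in Python rather than SQL (the toy customers/orders data is hypothetical):

    customers = [{"id": i} for i in range(1000)]
    orders = [{"cust_id": i % 1000} for i in range(5000)]

    # quadratic: scan every customer for every order, O(n * m)
    slow = [(o, c) for o in orders for c in customers
            if o["cust_id"] == c["id"]]

    # linear: build a hash index once, then O(1) lookups
    by_id = {c["id"]: c for c in customers}
    fast = [(o, by_id[o["cust_id"]]) for o in orders]
    assert len(slow) == len(fast) == 5000

In SQL the database's join planner can often do the indexing for you, which is exactly why knowing what the planner is (and isn't) doing matters.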


It always boils down to use-cases.

Ultimately, what formal verification gives you is a guarantee of structural correctness, but not behavioural correctness. Whether a program behaves as intended or required is still a task for humans. Garbage in - garbage out.

Whether structural correctness is a requirement is a function of the software's criticality and purpose.

You probably don't need it for your blog...

There is one critical FYI. If you require formal verification of your code you have to give up Turing-completeness. https://en.wikipedia.org/wiki/Total_functional_programming


This is a facetious comment that others have made: https://twitter.com/pigworker/status/913454521610842114

One issue that it glosses over is that in mathematics, the object of study is a-priori truths (and mathematicians are usually Platonists). I would express this as saying that math is interested in knowing what's true. So obviously it is applicable to proving things about programs. So far so good.

But programming involves a lot of work which could be described as a-posteriori. As an example on dictionary.com puts it, "an a posteriori argument [...] derives the theory from the evidence". Programmers are designers, wrestling sense out of complex and sometimes poorly expressed specifications, requirements, and realities. This doesn't map onto mathematics: it's neither (in Gowers' terms) theory building nor problem solving, because mathematical theory building is an a-priori business dealing reflexively with mathematical tools, not with theories of the outside reality. A typical large software system is an unwieldy, organic thing, to which mathematically formulated theories apply in the same way as they do to biological organisms. Sometimes math can describe aspects of a complex system well, but it can't tell you how to build it, any more than it can tell you how to build the Parthenon.


A-priori truths are called axioms.

In the context of the Curry-Howard isomorphism you don't "prove things about programs".

Programs ARE proofs. Not all programs, mind you. You need to be using a dependently-typed language like Agda, Coq or Idris.

The link in my post to the UniMath project is a bunch of codified proofs to various theorems in Category theory, Topology, Combinatorics etc.


I don't agree that a-priori truths are limited to axioms—it's a much larger category of "necessary" truths. A theorem which has been proved is an a-priori truth.

Sure, programs are proofs in the context of Curry-Howard. But not necessarily proofs of what you want them to prove.


If it has been proven from your axioms (really down from the axioms!) then you can add it to a library of statements you 'know' (or in the case of your axioms, assume) to be true.


More succinctly: The thing you want proven is a proposition.

A proposition is a type. Writing the algorithm which implements the type is the same as proving it.

https://ncatlab.org/nlab/show/propositions+as+types
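A minimal sketch of that idea (Lean 4 syntax assumed): the function below is at once a program and a proof that conjunction commutes.

    theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
      fun ⟨hp, hq⟩ => ⟨hq, hp⟩

The proposition p ∧ q → q ∧ p is the type; the lambda that swaps the two components is the algorithm inhabiting it.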


A theorem is a consequence of a set of axioms. By definition.

It may be an a-priori truth in the context of the particular proof you are working on, but prior to your theorem there is always an axiom.

Wanting to prove that X is a theorem of a-priori truth Y is the same thing as writing the algorithm for f: Y -> X


Thanks for explaining what I was trying to say much better than myself.


...maths is not a formal language by any stretch of the imagination. I think about the best definition one can give is maths is what mathematicians do, and maths notation is what one mathematician writes down to communicate mathematics to another mathematician.

It's circular in about a dozen ways but that is as it is IMO.


I'm fairly certain that Hilbert would disagree with you:

https://en.wikipedia.org/wiki/Formalism_(philosophy_of_mathe...

In particular, most modern mathematics can be formalized within first-order logic with the ZFC axioms:

https://en.wikipedia.org/wiki/First-order_logic

https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_t...

As a concrete example, the metamath project has formalized (and formally verified) huge chunks of mathematics:

http://us.metamath.org/mpeuni/mmset.html

Certainly there are aspects of mathematics which are not formal (beauty, parsimony, elegance, clarity, geometric intuition, etc.) but saying math isn't a formal language is like saying chess doesn't have rules.


He's not saying mathematics doesn't have rigorous foundations.

Bertrand Russell's Principia Mathematica is computation. But math textbooks/papers are not like this. They share insights, opinions, and metaphors, and frequently explain things in non-rigorous terms, although these are not substitutes for proofs, even if a proof never appears for something given a non-rigorous explanation.

In less heavily proof-read contexts, such as in discussions or a classroom, it's not uncommon for mathematicians to be pretty free-wheeling with their notation and to leave it up to context and intuition to figure out what an individual symbol means at a given time. And it wouldn't be a faux pas, or a sign of a lesser mathematician, or anything like that.

These things can be valuable and in some sense are a part of mathematics.

Although the end results of mathematics can be distilled into formal computations, it's far too reductive to say that's all mathematics is.


The exact same thing can be said about any human dynamic where the objective is to teach.

The end goal is to make the material accessible to the learner by connecting it to some pre-existing human intuition or experience, thus providing the learner with a vocabulary to express their thoughts.

The important question is how are the vocabularies (both formal and informal) produced by Mathematics different from the vocabularies produced by Computer Science when both fields are working on the same real-world problems?

And if the two vocabularies are of equal utility - do we really need two different tools for the same job?


Computer science and mathematics do not work on only the same problems or produce the same results, except for the parts which overlap. Unless you're attributing results by automated theorem provers as results in computer science?

The point of mathematics being expressed in these ways isn't just teaching. Mathematics may be abstract, but it is still fundamentally abstracting patterns which arise from the human experience (for the most part). Tying mathematics back to human experience is an important aspect of the discipline. But another important reason is to engender further research and advancements.

What you're saying is exactly asking whether programs can replace programmers, since you can design a program which generates programs.

Mathematics and computer science are human disciplines whose purpose is to find results which are of interest to humans. At this point we are able to sometimes verify theorems and occasionally prove a theorem automatically. But ATPs are unable to write comprehensible proofs, tell which corollaries may be of interest, or autonomously build off of theorems in directions which would interest us. Nor are they necessarily able to prove theorems in reasonable time frames.

Furthermore, the results produced by the vocabulary are not just theorems. High-quality proofs are just as valuable. They tell us why things are true, in ways which can be significant or profound. Although the 'why' is not necessarily useful, it is still of interest to mathematics.

Perhaps ATPs will be able to accomplish all this eventually. But currently they don't provide the same utility as mathematics.


I think you are missing the point.

Solving a problem using mathematics is EQUIVALENT to solving a problem using computation (which is a silly truism, because your mind is a computer).

They are two different mind-instruments, two frameworks for thinking and thinking about thinking. They are functionally identical.

What you can do with one - you can do with the other.

Because Mathematics can only prove things up to an isomorphism. And Curry-Howard is an isomorphism.

Whether any particular solution can be automated is not my point at all.

My point is that when a human solves an abstract problem - they express the solution in language.

Some humans speak Mathematics. Some humans speak Python.

The divide between Mathematics and Computer science is cultural, not grounded in utility.

Hence why - I would love to see some unification.


>I'm fairly certain that Hilbert would disagree with you

Hilbert failed. Besides, actual mathematics is not done in a formal Hilbertian way...


Where Hilbert erred was looking for foundations in first-order logic. Univalent foundations are laid on higher-order logic.

It's the distinction between top-down and bottom-up thinking.

If you are into philosophy/epistemology - it's somewhat of an attempt to shift Mathematics from foundationalism towards structuralism.

I recommend this video: https://www.youtube.com/watch?v=O45LaFsaqMA

Hilbert's dream of Metamathematics has sort of come true in the form of homoiconicity in programming languages.


What mathematicians DO is computation.

What mathematicians WRITE is a formal language used to express computation.

Manipulating mathematical expressions has a name. Symbolic Computation. https://en.wikipedia.org/wiki/Computer_algebra
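A minimal sketch with sympy (one of the standard symbolic computation libraries; the expression is an arbitrary example):

    import sympy as sp

    x = sp.symbols("x")
    expr = sp.sin(x) * sp.exp(x)
    print(sp.diff(expr, x))       # exp(x)*sin(x) + exp(x)*cos(x)
    print(sp.integrate(expr, x))  # exp(x)*sin(x)/2 - exp(x)*cos(x)/2

The library manipulates the expressions themselves, exactly the way a mathematician pushes symbols around on paper.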


One can view it as "language" at multiple levels.

In a computer language, there is a lexical level, and then there is a slightly deeper syntax level. Beyond that, there are levels of language, too: JSON or XML documents can follow one schema or another, for example. This schema defines a language on top of the JSON or XML language. (And you can do the same thing with any other data structure.)

In math, the same sort of levels exist. At the most superficial level is the notation. At that level, I don't think mathematicians use formal language. They're free to invent a notation or even misuse a notation in a proof. There are no absolutely fixed rules, the complete set of rules does not have to be communicated up front, and when push comes to shove, anything goes as long as it's clear to another mathematician.

But at a deeper level, they deal with formalisms and manipulate information in a formal way. If you want to call that information manipulation language, I think that's reasonable because it can be used (and agreed upon) in an exchange of ideas.

For example, do you want to say the Fibonacci sequence starts "0 1 1" or "1 1"? It's just a definition thing, but if you and I agree on one or the other and then start talking about the Nth element of "the Fibonacci sequence", we are communicating with each other in terms of that. (And it's formal because we are basing things on not deviating from that.)
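In code, that agreed-upon convention is nothing more than a pair of seed values, e.g. this hypothetical sketch:

    def fib(n, seed=(0, 1)):
        # pass seed=(1, 1) if your convention drops the leading 0;
        # "the Nth element" only means something once we agree on the seed
        a, b = seed
        for _ in range(n):
            a, b = b, a + b
        return a

    print(fib(6))           # 8   (0, 1, 1, 2, 3, 5, 8, ...)
    print(fib(6, (1, 1)))   # 13  (1, 1, 2, 3, 5, 8, 13, ...)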


>when push comes to shove, anything goes as long as it's clear to another mathematician.

Well, yes! That's not inherent to Mathematicians. That's how all human communication works. That's what we use "natural" languages for.

What we use formal languages for is to minimise the ambiguity, imprecision and non-determinism (entropy) in natural languages.

Perhaps one way to put it: the purpose of natural languages is communication and context-setting. The purpose of formal languages is precision.

That line gets blurred in computer science (and in my own mind) because of Homoiconicity.

>For example, do you want to say the Fibonacci sequence starts "0 1 1" or "1 1"? It's just a definition thing

I don't want to say absolutely anything about Fibonacci in a vacuum.

The problem I am solving WITH Fibonacci will dictate whether it needs to start at "0 1 1", "1 1", or whether the difference is immaterial.

Off-by-1 errors are just a fact of a programmer's life. We make those decisions on a case-by-case basis.

> if you and I agree on one or the other and then start talking about the Nth element of "the Fibonacci sequence", we are communicating with each other in terms of that.

What you are pointing out is problems of convention and consensus.

Off the top of my head I can think of a handful of strategies for coming to agree on the 0th value of the Fibonacci function in English. I bet you can too - all those things are intuitive to humans and we take them for granted.

In computer science many of those problems have already been formalised, and some have been solved.

https://en.wikipedia.org/wiki/Consensus_(computer_science)

P.S. You have been using the concepts of "communication" and "information". Shannon formalised those!

P.P.S. Fibonacci is a numerical formalisation of the Golden spiral.

In a nutshell, what I am pointing out is succinctly expressed by Donald Knuth.

Science is what we understand well enough to explain to a computer. Art is everything else we do.


The Chomsky hierarchy covers all of that.

It's linguistics for formal languages. Grammar, syntax and semantics together with reflection allows for formalisms and meta-formalisms.

It all culminates in Turing Completeness. Chomsky Type 0 grammars.

If you find something more powerful than a Chomsky Type 0 grammar - you have made a ground-breaking scientific discovery.


I'd say what mathematicians do is "think". We call the sort of thinking mathematicians do "mathematics". I get through quite a bit of grad stats in any given week, and I'd say that what I'm doing is not reading the computations of other mathematicians. I'm reading words in English, which try to explain concepts which are then sketched formulaically. I'm also reading proofs that are usually completed by a "mathematical" intuition and are very rarely, if ever, constructivist.

Maybe, that's just me busy thinking whilst the rest are busy computing :-)


By the way - this is rather circular and begs the question.

> We call the sort of thinking mathematicians do "mathematics"

What kind of thinking do mathematicians do? Is it any different to the kind of thinking computer scientists do?


Once you are done thinking your thought() function returns a result in the form of language, no?

Or does thought() fail to halt?


Mathematics hasn’t been “computation” since the days of hand-tabulated lists of logarithms.


A lookup table is a particular case of a space-time trade-off.

https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff


Sure, if you restrict yourself to intuitionistic logic... which is odd (to use the most charitable description I can give of it).


I need to go home and eat a bunch of weed edibles so I can read this comment.


Update: So I got drunk instead (which has basically the same effect on me).

Programming is math, except in the real world you have to deal with countless technical and business constraints. So unsurprisingly, theory != practice. The substance might just change in practice because a theoretical answer is completely impractical.

A formal language? Mathematics is a language of hand-waving. And I say that as a person who _loves_ math, even though I didn't take math beyond calc in college. When you translate math into algorithms and real programs, you begin running into problems that shape your solution. Really, you could come up with infinite different solutions to the same problem, all with different characteristics, use cases, strengths and weaknesses. There's no point in making a shared library, because it would be too fucking big and confusing. Better to formulate a solution yourself at this point, if you know what you're looking for, and tune it as you go.

I expect little to no cross-pollination because mathematicians love math and love their own work and love not having to deal with implementation issues, and have no desire to learn anything you mentioned. Unit testing, refactoring, ...none of it. Even many CS profs don't care, it's just the way it is -- a love of powerful ideas, not so much the dirty work of making them a reality. They theorize, and write off everything else as "applications." Nobody wants to write a beautiful algorithm, that also has to handle the sizes of different data types, and consider that not all the data may be in memory, and so on.


I do sometimes think more math might unlock more options for my software career (like if I had a better grasp on calculus I could be more effective at 3D graphics and simulations, for example).

But right now, I probably use math more often in my main hobby than I do when I program, and that hobby is board game design.

I'm often resorting to math to figure out how to make sure my designs are balanced, how many cards I should use at different player counts, how many cards I should include if I have different combinations of symbols on them (first time I've had to break out combinatorial functions outside of school), determining and balancing probabilities of different things happening, recording the results of multiple playtests and compiling and analyzing various statistics from those playtests, etc. Some designs for my games have even been inspired by game theory, computer science structures, fractals, etc.

One of the most prolific game designers out there today is Reiner Knizia, who has over 600 published games and a doctorate in Mathematics. I can see why. There are all sorts of neat, fun things that can be found by probing different features and patterns in mathematics. What I've been trying to do is find corners he probably hasn't discovered himself yet, but considering I'm only aware of about 50 of his more well-known games, the concepts I think are pretty new could very well be hiding in one of his 550 lesser-known games. Several times I've come up with an idea, only to bump into one of his designs a month or two later that does something similar.

So if I can find a bunch of uses for math for game design, there's probably a ton of potential applications I could see if I directed those energies more towards software engineering. And learn more math.


Another place where I wish I knew more math is Programming Language Theory. Proofs, abstract algebra, Denotational semantics etc. This would've let me work through books like Software Foundations, TAPL etc.


Aside from probability, what math do you use in board game design?


Lots of things, especially with patterns. Let's take scoring for example. Set collection is a common mechanic in games, where you try to collect sets of various things, and generally the more you collect of something, the better you score. But when you're designing the game, how much should that increase?

First there's linear, like 1 is worth 1 point, 2 is worth 2 points, 6 is worth 6 points. For that, there's not a whole lot of incentive to encourage people to make sure they get more cards in that set. But one way in which it could work is if they can only score in certain sets, so they prioritize them because that's the only way they can score it.

But if you want to encourage players to go after a set that they already have the most of, you need to have each one worth progressively more, and a really good pattern for that is the triangular numbers. If you recall Pascal's Triangle, the triangular numbers run along the third diagonal, and the pattern is basically 1 (+2) = 3, 3 (+3) = 6, 6 (+4) = 10, etc., giving you 1, 3, 6, 10, 15, 21, 28, 36. So if you only have 2 of a thing it's only worth 3 points, but if you get 6 of a thing it's worth 21 points, significantly more. You'll see this scoring pattern quite often in set collection games. Offhand, I know Ponzi Scheme and St. Petersburg use it, but I've probably seen it in at least 20 games. If you're not a mathematician or you haven't encountered it in a bunch of other games, you might not recognize the pattern or realize that it's a good way to score set collection.

Another way that is used less often, because it's a much steeper increase, is quadratic scoring, i.e. squaring the set size (often loosely called "exponential"): if you have only two items in a set, it's worth 2^2 = 4 points, but if you have 8 of them it's worth 8^2 = 64 points. But if you want the winner to (almost always) be the person who got the biggest single set, and make people really battle it out just to get one more card in their biggest sets, then this is the way to go. I've seen it work really well in The Rose King, which even includes a scoring chart on the back of its rules going all the way up to 32 squared, because it's possible to get groups that big (although rare). Apparently an old Alex Randolph game called Good Neighbors also used this.

One I've been playing around with lately is multiplicative, i.e. you multiply the numbers of each of your sets together. So if you have 3 in one set, 4 in another set, and 5 in a third set, your score is 3 * 4 * 5 = 60 points. It opens up an interesting dynamic where sometimes it's better to get cards in a new set than to keep going in one set, but only getting one card doesn't do you any good, and sometimes you'll score more points getting cards in a set you have very little of instead of getting more cards in a set you have a lot of.

Then there's an old one Knizia likes to do where it's basically a minimum function on your sets: you score only the set you got the lowest number of points in. He uses it for several of his games, including what many consider his masterpiece "Tigris & Euphrates", as well as "Ingenious". This scoring method forces the player to try to get a variety of sets and not ignore any set, because whichever set they get the least of will be the one they score.

Another one I haven't seen too much of, but I've designed a game using it and seen at least one other game that uses it (That's Life! by Wolfgang Kramer uses it for a single small module in one of its expansions), basically uses a sine wave for scoring. If you have an odd amount of a set, you score positive; an even amount, you score negative. Back and forth like a sine wave. It has an interesting property in that the more you get of something the better you score, but at the same time the more you try to keep it an odd amount and fail, the worse you score. So it's like walking a tightrope for that set: the more you get, the better it is and the more dangerous it is at the same time!

Another scoring method Knizia used to good effect in his card game High Society is kind of the same concept as outliers in statistics (this one is a bit of a stretch, admittedly; there might be a better math concept this applies to). With a data set, you often toss out outliers to get a more even distribution. Well, High Society is an auction game where you try to score the most points. However, whoever spends the most money to get those points over the course of the game loses - tossed out like an outlier. So you want to do what you can to score the most points, but you have to hold back enough so that essentially you're the second-place winner. But of course everyone is trying to do the same thing, so it leads to very tight auctions and gambling on when it's safe to bid a little extra to make sure you score something at all.

So yeah, those are several different methods of scoring based on different patterns in mathematics. And that's nowhere near comprehensive; I could probably come up with a half dozen more. As a designer you need to choose which scoring method is best for your game (or you can look into mathematics to find other possibilities).
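For reference, here are the scoring rules described above as tiny Python functions (my own sketch of the patterns, with assumed set sizes):

    from math import prod

    def triangular(n):         # 1, 3, 6, 10, ... = n * (n + 1) / 2
        return n * (n + 1) // 2

    def square(n):             # The Rose King style: set size squared
        return n * n

    def multiplicative(sets):  # product of all your set sizes
        return prod(sets)

    def weakest_set(sets):     # Knizia style: your smallest set scores
        return min(sets)

    sets = [3, 4, 5]
    print([triangular(n) for n in sets])  # [6, 10, 15]
    print(multiplicative(sets))           # 60
    print(weakest_set(sets))              # 3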

And that's just scoring. There's all sorts of different aspects of games where knowing mathematics and patterns can help you with your game design.


A regret: never being able to afford university and always feeling like it was too late when I finally could.

It turns out I really enjoy type theory, category theory, and formal mathematics. I also care about liability, reliability, and safety. And all I can do is study in my own time, which I do freely, in order to catch up. And I will never get a research position or work at a startup where I can put these skills and ideas to the test.

Oh well. I will still work on the libffi integration in the community Lean fork and will try to get Lean working on AWS Lambda and GCP, etc.

If you're younger and starting out -- don't neglect maths! It is far more useful than the programming language du jour. The former is how you weather the constant flux of the latter. And how you solve hard problems.


Interesting point, reminds me of this article: https://www.supermemo.com/en/archives1990-2015/articles/geni.... One of the points it makes is basically that abstract thinking is more valuable than specific facts, and that math is basically the king of this. I hadn’t really thought about how comparatively weak something like programming can be, since a strong focus of it is syntax (if programming and computer science are somewhat separated).


I would agree that the ability to think abstractly is a fundamental skill to designing systems using software. Maths is a good tool for this and it has aided engineers and scientists for centuries.


Can you tell me more about your project to get lean working on the cloud providers?


Sure!

The high level goal is being able to write proof-carrying code in more places.

I think it's important for us to be able to guarantee properties/requirements of our programs and one way to do that is with formal methods. Lean is not only a proof assistant but also a dependently typed pure functional programming language with a decent VM. I would like to be able to ship the program from the proof.

I'm starting from the low level of the Lean VM by adding libffi support so that we can write high-level bindings to C libraries and bootstrap the Lean ecosystem. We need libraries to call databases, parse JSON and other serialization formats, speak HTTP, etc. The goal is to be able to ship an AWS Lambda function written in Lean.


Sounds very cool. Is there any way to follow your progress?


I'm a self-taught programmer and I turned that into my career. I didn't go to college and I didn't pay a lot of attention in Math classes in High School. I'm regularly reminded that I should have focused more attention on math. There are occasional challenges where understanding complex math would have been extremely helpful in solving real problems.


I think the most important thing is to perhaps be cognizant of your limits and strengths, and try and learn about what's out there that could be employed to solve problems, even if it's not you doing it.

I once worked with a guy with a PhD in math, and was really impressed by some of the stuff he was able to do. On the other hand, his coding skills were not stellar, and I was able to give him a lot of advice and help in terms of improving the structure and quality of his code.

Working with diverse teams is fun!


I was in the same position and did some research to get better, and I seriously recommend starting with Gelfand's book on Algebra (and another Gelfand book on Trigonometry). These are educational books developed by one of the best Soviet mathematicians, translated perfectly to English. It starts with the absolute fundamentals of math, but asks such critical questions that you develop a much more nuanced understanding of maths you thought you already understood.

From there I plan to slowly make my way through the entire Gelfand library and Spivak's Calculus. Maybe down the road Category Theory and Engineering Maths, or Physics.

There are so many amazing resources online that I don't see any reason why a motivated learner with spare time can't achieve an education on par or better than a University undergrad.


This 100%. So is there any good way to revisit all the maths in a correct order of operations as they would apply to a topical area of CS in general?

I've picked up a few overview textbooks and have bookmarked a copious number of math-related refreshers, but it's hard to gauge whether the material is good or not. Any MOOCs have good tracks that don't cost an arm and a leg?


This isn't a popular suggestion (and by that I don't mean to say it's rejected or people don't like it, I just haven't heard it suggested before in this context), but at university for electronic engineering we used K.A. Stroud's Engineering Mathematics. This book is surprisingly little focused on actual applications to engineering; it takes you through calculus by introducing the derivative, for example, and then some linear algebra stuff. But what surprises people is that it starts off with the properties of addition and multiplication - it's that simple. It's a book that starts from zero and takes you very, very far. It won't take you to a mathematician's 100, but it'll take you to any serious engineering undergrad's 100.

If I recall correctly it also has problems for you to do - which is key for understanding mathematics and developing a sense of intuition.


I mean, the college math I had was: calc 1-3 and differential equations (4 classes total), and probability/statistics (1 class). I didn't have to take linear algebra, but I got a good enough understanding from calc and my algorithms class (we studied algos from a theoretical perspective with math, like the CLRS book, so we did some vector and matrix stuff).

Paul's Calc Notes are awesome for calc. But it seems like you have plenty of resources right now, so probably the best thing to do now is actually pick one at a time, and go through it.


The best collection of resources I've found has been on https://www.reddit.com/r/learnmath/ of all places.


Find a well-regarded college math curriculum and read the textbooks?


Similar situation, and I'm considering going back to school. Well, except, I forgot a ton of basic mathematics! Even made a thread about it a couple weeks ago.

https://news.ycombinator.com/item?id=20446796


I get the feeling that when the article says "math[s]" it means continuous maths, like matrix arithmetic, calculus and statistics (judging by the examples given). Those can be useful, but not as immediately useful as the discrete maths for computer science that most programmers are (er, I think) at least somewhat familiar with: propositional and first-order logic, combinatorics, complexity theory, computational theory (including automata and languages) and, well, binary arithmetic.

Not to mention: algorithms.

I see the reference to category theory. That's discrete, of course, but it's intermediate to advanced. You don't get there without some solid foundations in logic, which you should expect to get from your CS course.


I agree that discrete math is even more widely applicable to programming than continuous math, but I would add one quibble to your comment: matrix arithmetic is not necessarily continuous — Shamir secret sharing works by doing polynomial interpolation in a Galois field, and that's linear algebra, even if you never materialize the Vandermonde matrix. Similarly, Peterson–Gorenstein–Zierler decoding of Reed–Solomon codes — one of the most-widely-used ECC schemes — is grounded in matrix algebra over finite fields too.
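To make the Shamir point concrete, here's a toy sketch over a small prime field (the prime, threshold, and secret are all arbitrary demo values; real implementations use far larger fields and careful randomness):

    import random

    P = 2**13 - 1  # a small Mersenne prime (8191), fine for a toy demo

    def split(secret, k, n):
        # random polynomial of degree k - 1 with constant term = secret
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0, all arithmetic mod P
        total = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    shares = split(1234, k=3, n=5)
    print(reconstruct(shares[:3]))  # 1234 - any 3 of the 5 shares suffice

That's linear algebra (a Vandermonde system) even though no matrix ever gets materialized.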

Myself, I've just started picking up some continuous math after years of neglecting it, because graphics, sound, probability, physical simulation, convex optimization, and neural networks are all continuous math. I feel like I've been really missing out!


I was a math + physics major in college, but I work at an outfit that has a lot of engineers. In my observation, most of them are weak at math -- they got through it OK in college but it didn't come alive for them, which is probably not their fault. And as they start their careers, they can easily get productive enough without using their math, whereupon they forget it pretty quickly.

This isn't a slam on anybody -- these are bright and capable people, and I envy their success.

A handful of people, maybe 10%, gravitate towards the problems that involve math. Maybe they took a personal interest in math while in school, or maybe they're using math to cover up a shortfall in some other area. They become the "math people" in the department, and everybody brings quantitative engineering problems to them. I'm one of the math people, not even officially an engineer, but I volunteer to solve problems that other people hate. I'm actually not as successful in my career as some of the engineers, but at the same time, I'm doing OK considering that I got a more esoteric degree.

I think we could teach math in a way that's more relevant to the people who might actually use it, without detracting from what makes math come alive for math people. Let's teach more computation and proofs, perhaps from the get-go. Computation is how most people solve problems anyway. Proofs offer a much richer palette of ideas and styles than memorized "forms" and algorithms.

Granted, any reform of math education suffers the same pitfall as contemporary and historical methods: Massive attrition. This is the huge unsolved problem in math education.


I think the biggest problem of math education is that it’s not taught in a way that maximizes long-term memory. There’s a lot of research on it, but I think there’s plenty of teaching methodology for making students remember better that goes underutilized. An example: interleaving old problems in tests and homework. It doesn’t take much work, but from what I remember there are studies showing significant improvement in final test scores and likely in overall recall. Depending on students to review older material on their own isn’t a good bet.


I did a dual major in math and computer science, and I recommend it if you are aiming for a research career like me. It helped me considerably in AI and mobile robotics, and I think in some ways it put me ahead of people who "only" did CS. I guess one of the reasons for that is the gaping absence of math in the CS curriculum at my uni.

Of course the issue is that nobody knows prior to choosing their major if they want a research career! There's no point in studying topology and measure theory like I did if you end up being a front-end dev, apart from personal development.


To each their own, but a lack of math skills has rarely hurt me in my career. I guess I could be getting paid even more money as an ML specialist if I had more math knowledge. But you can make absurd amounts of money as a skilled generalist, so that only hurts you if you really find meaning in that type of work above what you'd do as a generalist programmer. And if that's where you're at, you can always study the math now.


To each their own, but the truth is that life isn’t just about money, and if someone knew maths, there’s a higher chance they can be getting paid the same amount of money working on more interesting problems surrounded by people who aren’t thinking only of money.


I'll stop thinking of money when my survival stops depending on it.


Similar story here - went to college, majored in CS, found out how much of CS is math, thought it was just there to be difficult, learned just enough to (barely) pass. It wasn’t until much later, when I did some graphics programming, that I wished I had paid closer attention. I’ve been going back and trying to re-teach myself a lot of the stuff, like differential equations, that I probably would have gotten a lot quicker and easier if I had just paid attention when it was the only responsibility I had, along with access to an expert on the topic whom I was actually paying to help me learn it. Oh well, some of us have to do everything the hard way…


It would really help if professors gave you a taste of something applicable before diving deep into the math itself. I suffered through most of differential equations and calculus, but I happened to take a computer graphics course before linear algebra. We learned just enough math to make things work, so when linear algebra later took a deep dive, I was able to appreciate it, having already had the basics of affine transformations explained in a very visual and intriguing manner.
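(A minimal sketch of the affine-transformation idea such a graphics course leads with; my own illustration, not anything from that class, assuming numpy is available. In homogeneous coordinates, rotations and translations both become matrix multiplications, so transforms compose by multiplying matrices.)

    import numpy as np

    # Rotate a point 90 degrees about the origin, then translate by (2, 1).
    # In homogeneous coordinates, both steps are 3x3 matrix multiplications.
    theta = np.pi / 2
    rotate = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0,              0,             1]])
    translate = np.array([[1, 0, 2],
                          [0, 1, 1],
                          [0, 0, 1]])

    point = np.array([1, 0, 1])        # the point (1, 0), homogeneous
    print(translate @ rotate @ point)  # ~[2. 2. 1.], i.e. the point (2, 2)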


It's a major problem still: teachers neglect giving examples of practical application of a given concept (might be difficult at times, I admit), and miss the opportunity to make knowledge stick and increase the intuitive understanding of the subject.

I still remember my dialogue with my high school maths teacher when first learning calculus; it went more or less along these lines: "But why do we need these differentials?" "Because it will help you to understand integration." "But why do we need integration?" "Because it will help you in your further education." (No, she didn't even mention velocity/acceleration or the area below the curve, as is usually done; I had to figure these out myself.)
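(For what it's worth, the motivation the teacher skipped fits in a few lines. A toy sketch of my own, not from the thread: a derivative is just an instantaneous rate of change, so velocity falls straight out of a position function.)

    # Velocity is the derivative of position: the limit of
    # (s(t + h) - s(t)) / h as h shrinks toward zero.
    def position(t):
        return 5 * t ** 2        # say, metres travelled after t seconds

    def velocity(t, h=1e-6):     # finite-difference approximation
        return (position(t + h) - position(t)) / h

    print(velocity(3))           # ~30.0, matching the exact derivative 10*t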


Tons of free resources out there. OpenStax CNX is a solid source of free online textbooks on algebra and calculus. A Programmer's Introduction to Mathematics is also free.


I agree. I once worked with a Russian guy (it seems the communists were really good at teaching math), and he could often synthesize super elegant solutions because he knew how to express them in math. They were much shorter and more concise than my solutions, which were often very much brute force.
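(A toy illustration of that effect, mine rather than the commenter's: counting the pairs in a collection. The brute-force version loops; knowing the closed form collapses it to one line.)

    def count_pairs_brute_force(n):
        count = 0
        for i in range(n):
            for j in range(i + 1, n):
                count += 1
        return count

    def count_pairs_closed_form(n):
        return n * (n - 1) // 2  # "n choose 2": O(1) instead of O(n^2)

    assert count_pairs_brute_force(1000) == count_pairs_closed_form(1000)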

On the other hand, I have worked with physicists and mathematicians who were great at math but got nothing done in code. Applying math to coding seems to be a very special niche.


I don't think it has anything to do with communism, but I've made similar observations. I don't know if that's still the case these days.

I suspect the reason is that in many countries there is a greater focus on getting the fundamentals done with significant rigor, while US curricula put more emphasis on breadth. I think they [the Russians] are right: for STEM-focused students, there should be no shortcuts in the mathematics curriculum.


I can't find it at the moment --- it was an essay about a Russian mathematician's experiences coming to the United States and the culture shock with respect to education --- but in it he offered that Soviet academics' relationship to mathematical education was rooted in its being a safe avenue for self-expression and collaboration amid all the censorship. Students were driven to solve every problem they could because these groups were such a precious resource. (I believe the Math Circles program in the United States is inspired by these arrangements.)


PDF warning: https://faculty.utrgv.edu/eleftherios.gkioulekas/OGS/Misc/AR...

Good read.

I searched for "Russian mathematician's experiences coming to the United States and the culture shock with respect to education" ;)


This actually makes sense. The communists probably didn’t have much of an opinion about math, so mathematicians could do whatever they wanted within that area. I read the same about rock music in East Germany. The party didn’t know how to deal with it, so it left the fans and musicians alone.


Observation: the USA’s curriculum (or, often in recent times, the lack thereof) has not seemed to harm the USA’s position in the STEM field.


Consider that the largest employer of mathematicians in the US is the state. Why is that? Why aren't market institutions valuing mathematicians more?

Also, if you look at the individuals staffing the top hierarchies of business/market institutions, what kinds of credentials do you most commonly see there?

On top of that, even in intellectual subcultures, what stripes of intellect tend to be most respected in US culture?

I think there are a number of ways (some subtle, some less so) in which American society sends people messages about the status of STEM, and in particular about the M, which is almost always valued for its auxiliary use to something else and rarely respected in its own right.

I think it's plausible there are a number of features of a communist culture that might shift some of those incentives, especially when combined with a generally more intellectual culture.

(Also, this shouldn't need to be said, but this is not an argument about the general superiority of communist society. I suspect that more pragmatic societies do better than more ideologically organized societies.)


I would agree that math is useful in several programming situations, and I'll admit I maybe should have tried a little harder to understand it. Here's the problem, though: nothing's free. Concentrating more on math means you have to give something else up. Maybe math would have helped you in a certain situation, but what other skills have you gained that are useful in other ones? It's easy to say "Wouldn't it be nice if I had done this?" without having to spell out the consequences of that choice. (OK, that sounds really negative, but I'm sure you know what I mean.)


I sometimes wish I had a better foundation in college-level math when encountering proofs in algorithm books. This is probably the biggest hurdle to my just buckling down and studying the various complex algorithms.


In my experience, the kind of math you get in engineering classes (this is how you calculate integrals, now write down the result) is not really useful for programming.

However, the kind of math you get in a mathematics degree (we have this conjecture, now prove it) made me a much better software developer than I was before taking those subjects.

Of course, a lot of time has passed and education has changed, but the principle stands: learn to write proofs, as it will help you expand your mind.
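(To make that contrast concrete, a toy example of each style; mine, not from any particular course:)

    Engineering-class exercise:  compute  \int_0^1 x^2 dx = 1/3.
    Math-degree exercise:        prove  1 + 3 + ... + (2n - 1) = n^2.

    Proof sketch, by induction: true for n = 1; and if the sum up to
    2n - 1 equals n^2, then adding the next odd number, 2n + 1, gives
    n^2 + 2n + 1 = (n + 1)^2.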


I hear things like this from time to time, but among the programmers I hang out with, almost no one ever uses any math beyond basic primary-school stuff. I really wonder, is it:

- us doing boring projects (quite possible), or

- web dev and business apps are specific niches that don't need anything more complicated than interest-rate-formula-level math (see the sketch after this list), or

- it's just that the people complaining about not knowing enough math really don't even know the basics?
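(The sketch mentioned above: a minimal illustration of that interest-rate-formula ceiling, with figures of my own invention. Compound interest is about as deep as the math in most business apps ever goes.)

    # Compound interest: roughly the ceiling of "business app" math.
    def compound(principal, annual_rate, years, periods_per_year=12):
        rate_per_period = annual_rate / periods_per_year
        return principal * (1 + rate_per_period) ** (periods_per_year * years)

    # $10,000 at 5% APR, compounded monthly for 10 years: roughly $16,470.
    print(compound(10_000, 0.05, 10))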


I studied enough math to comfortably read Knuth and to solve real geometric and AI problems, which sometimes need linalg. That came to ~2 years out of a 5-year curriculum. I'm still happy with that decision.

Everything which sounded too complicated eventually was too complicated. If it speaks like a duck, acts like a duck and looks like a duck, it eventually might be a duck.


My degree is in maths but I got special dispensation to drop statistics so that I could do more computing. Regretted that ever since.


I have said to people a number of times, only half-joking, that the most complex maths I do during a typical day as a paid programmer is adding 1 to a number.

I dropped maths in my last year of high school (halfway through calculus), not because I was failing it, but as a pragmatic choice, because it was my lowest grade, and because it was the subject that I enjoyed the least. Instead, I picked up advanced literature and history. I have never regretted this.

I then went straight on to Comp Sci at university, where I avoided all maths except for basic set theory and boolean algebra. After that, I went into the workforce as a dev (originally mainly PHP, now mainly Python).

I have never felt that my lack of maths background hindered me. On the other hand, I have very often felt that my communication skills (particularly my formal writing skills) have been above average for a dev, and have benefited me greatly.

I would like to learn more university-level maths, but I have neither a pressing urge nor a burning desire to do so. Programming, at least the kind that most devs do, on the whole has very little to do with maths. I don't regret not learning more maths in preparation for a career as a programmer, and I certainly don't regret learning lots of programming in preparation for a career as a programmer.


Honestly, 3/4 of what I like in programming languages is mathy: monoidal arithmetic, where (+) = 0 and (+ 1 2 3 ...) just works; orthogonality; reasonable ..

Every time I had to stray away from this, my brain hurt.
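(For anyone puzzled by the monoid shorthand, a minimal sketch: (+) forms a monoid because it is associative and has 0 as its identity element, which is exactly why an empty sum is well defined.)

    from functools import reduce

    # (+) with identity 0 is a monoid: the empty fold is defined,
    # and any grouping of the fold gives the same answer.
    print(sum([]))                                   # 0, the identity
    print(sum([1, 2, 3]))                            # 6, like (+ 1 2 3)
    print(reduce(lambda a, b: a + b, [1, 2, 3], 0))  # 6, the same fold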


So, for experienced programmers: what are the best (or at least better) books and resources for learning math that 'works'?


He has a point. I started wishing I knew more maths when I started playing with TensorFlow last year...


If you had focused on math, you’d now be regretting wasting too much time on insignificant yet (intentionally) convoluted parts of math, feeling you had wasted a few years of your life. Just accept that you can’t know everything and that it’s in your own best interest to work with other people to achieve your dreams and to help them achieve theirs.


Math things that you think are convoluted (at least as you’ll find in refined undergraduate/graduate material) usually have a very good reason for being that way in my experience...


Dunno. I spent (wasted) 3 years of my life studying theoretical computer science at PhD level, including very advanced parts of (discrete) math, and many of those parts are just unnecessarily obfuscated to enable academic careers and an academic version of "demonstrating one's worthiness". There are few worse feelings than realizing, after a year and a half of difficult study, that the problem was either simple and merely obfuscated, or that the set of instances satisfying some theory is empty.


Thanks for sharing. I have been contemplating this as well.

I do have a degree in CS, did take a decent amount of math in college (vector calculus, linear algebra, discrete math), and aced all my courses. More than 10 years out of college I still retain a lot of what I learned.

I work in AI and get paid well. But that said, I wish I had a graduate degree, and the most useful subject, I feel, would be math.

You see, like many others, I didn't find that learning a lot of math would be useful. Yes, I was aware of its uses in 3D graphics and even learned about quaternions in college. But beyond that, I was ignorant of how useful it would be for analyzing data. And I had really tired of having to do proofs in college, so I didn't want anything more to do with something so difficult and unapplied.

Unfortunately, I have found that you need the math background to work on what would be more interesting problems to me. And recently there are a lot more of these problems where math is important, because of the AI wave. I am in my 30s and I'm thinking it may be too late, but I am still considering going for the Applied Math Master's offered online by UW. See the link below for anyone who may be interested.

https://www.appliedmathonline.uw.edu/academic-experience/cou...



