I wrote a blog post a few years ago[1] making a similar claim about computer science. I said there are three main camps of programmers:
- People who enjoy programming because it’s mathematically beautiful (eg Haskell programmers)
- People who enjoy programming because they like reasoning about machines, and like mechanical sympathy (eg C programmers)
- And people who like programming because it can solve real problems for their users.
I got pushback here and elsewhere that lots of people fit into multiple camps - which makes sense. Maybe they’re better described as an ecology of values. But I also think there’s something real here. I worked in a consulting shop a few years ago and my boss came back from some front end development conference gushing about a talk he’d seen that I had to watch. The talk was a layman’s description of FP’s immutability approach applied to React. My boss had never heard of immutability before because it had never come up in front end web development circles. There are lots of opportunities for our software to improve by cross-contaminating all our best ideas.
I like C-like programming languages, and I like to solve real world problems for my users. So two camps, I guess. Moreover, I was always that annoying kid in class who would ask the teacher, “but what use is that?” Often the teacher couldn't even answer this very simple question, and every time that happened it left me really unmotivated to learn. The epiphany came in upper secondary school when a math teacher shushed the other pupils and actually took his time to tell me what {insert abstract math topic here} was actually good for. This changed a lot for me, but as far as grades go, it was already too late. Though at least it left me interested enough to research some things in my spare time, so when I finally came to the university, I almost couldn't believe my own eyes when I started to get good grades on (for me) pretty advanced math topics. I obviously still have a pretty big disadvantage in mathematics, but researching things that require stuff like abstract algebra and calculus is now much less of a hurdle for me.
Fantastic! Thanks for this. The FP vs non-FP split has always just felt like a less rigorous version of the "analysis vs algebra" or "problem-solving vs theoretical" discussion and I'm glad I'm not alone in thinking this.
This is also what rubs me the wrong way about the simplistic level with which FP languages (like Haskell or Scala+monads) are either cargo culted or hated. The truth is, the two perspectives really go hand-in-hand, and we're all the worse for not realizing this.
All notations (programming or mathematical) are just different ways of expressing computations.
All notations have trade-offs at the level of semantics, making certain operations more difficult to express (what programmers refer to as 'the expression problem').
That's the entire spiel of reification/first-class citizenship/semantics of programming languages.
Instead of getting into the fanboyism of 'my language is better than your language', perhaps approach the problem from first principles (design): what is the "IT" that you want to talk about, reason about, express, and manipulate in your program?
That dictates the sort of language you need to solve the problem that you are trying to solve.
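To make the trade-off concrete, here is a toy Python sketch of the expression problem (purely illustrative names, not from the comments above): with an object-oriented decomposition, adding a new data type is easy but adding a new operation touches every class; with a functional decomposition, it's the reverse.
```python
# Toy sketch of the expression problem (illustrative, standard library only).

# OO decomposition: adding a new shape is easy (just a new class);
# adding a new operation (say, perimeter) means editing every class.
class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

class Square:
    def __init__(self, s):
        self.s = s
    def area(self):
        return self.s ** 2

# Functional decomposition: adding a new operation is easy (just a new
# function); adding a new shape means editing every function.
def area(shape):
    if isinstance(shape, Circle):
        return 3.14159 * shape.r ** 2
    if isinstance(shape, Square):
        return shape.s ** 2
    raise TypeError(f"unknown shape: {shape!r}")
```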
In my opinion, the FP/IP split is fueled in no small part by a wildly divergent set of vocabularies. A dozen different <greek letter> transforms are inscrutable to anyone who has not spent much time studying Lambda calculus, and the concept of a monad is famously difficult to explain.
You wind up with two camps that cannot communicate. It's often made worse by the functional programmers - who are mostly also imperative programmers - doing a poor job of enabling translation.
And people who do not care about any of those things and just want a paycheck, or to be promoted so that they do not have to code anymore (95% of developers worldwide).
Unfortunately the people who get excited building things are outnumbered by the people who are just in this for the money.
> Unfortunately the people who get excited building things are outnumbered by the people who are just in this for the money.
I don't think there's a reason to be sad about it. There are plenty of e.g. maintenance jobs in the industry, and people who are passionate about programming may not be the best fit for this niche - it's boring, and sometimes you feel the urge to re-invent a few wheels even to the detriment of actual business needs. For the not-so-passionate, however, it's a good place - minimize effort while earning a good salary.
I'd love to see actual numbers on this. It's not 95% - but all of our intuitions will be a bit wrong because people cluster in workplaces based on personal values. So no matter what you're like, you'll end up working with people similar to yourself.
I wouldn't be surprised if people who just program for a paycheck are the majority of employees in many offices. I know they're a tiny minority in a lot of the places I've worked at. That's not a boast or a coincidence - humans cluster in tribes based on shared values. I don't consider myself part of the same community as people who show up to work to close jira tickets and collect a paycheck.
> I'm not sure it makes sense to describe people who love programming and people who just do it for a paycheck as truly being in the same field.
This is why I keep my mouth shut IRL about just wanting the paycheck. I'd imagine a lot more than a tiny minority of people are just there for the job where you've worked. We just keep our mouths shut because the passionate folks judge us paycheckers.
It's fine to optimize for the paycheck first and foremost.
But while you are at your job anyway, you might as well make it slightly less of a drudgery.
(Eg I quite like functional programming. Both because I like the style of thinking, but also because I don't trust myself nor anyone else with code; and eg a heavy emphasis on immutability means that I'm less likely to get woken up in the middle of the night with some production problem.)
I have a ton of coworkers who try to make their job more interesting by trying the most interesting tools instead of the best tools for the job. They have less drudgery, but more stress from not hitting deadlines as reliably. Meanwhile I'll embrace the drudgery and do the boring thing, stay less stressed because I get it done more quickly and reliably (having chosen the right tool), and I got promoted really high.
Taking a mercenary attitude towards work works just fine for me, and keeps me overall happier.
Per your example, I like FP and immutability for the same reasons (won't get paged as much), and I'm always reading to stay on top of my game, but it's still just work. If you worked with me, you'd probably think I live and breathe this stuff as a passion because I put in so much time to stay on top of the tools available. But I hate programming. Hell, you might even say my hate of it drives mastery in the same way your passion drives your mastery. To me, it's just a tool by which to automate things I do care about (and retire younger than I could in other roles).
You don't have to go full on and re-write everything in Haskell. Small scale decisions can already make your life easier without much disruption. Eg if you are using Python, by default stick your information in frozen dataclasses, and only deviate from that with good reason.
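For example, a minimal sketch of that Python default (hypothetical names, standard library only):
```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Order:            # hypothetical example type
    item: str
    quantity: int

order = Order("widget", 2)
# order.quantity = 3    # would raise dataclasses.FrozenInstanceError
updated = replace(order, quantity=3)  # "updates" produce new values instead
```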
Another thing I have a minor passion for is handling corner cases, as much as possible, in the main line of the code.
Eg instead of having a big 'if' at the start of your function to handle an empty list and return early, try to make sure that the main logic can deal with empty lists just fine (see the sketch below).
After all, the simpler your control flow, the easier it is to get code coverage in tests and production.
(I mention 'and production' because in practice, lots of code is battle-hardened instead of sufficiently tested.)
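Something like this, as a toy Python sketch (made-up function names):
```python
# Guarded version: the empty list is special-cased with an early return,
# so there are two code paths to cover.
def total_guarded(xs):
    if len(xs) == 0:
        return 0
    total = 0
    for x in xs:
        total += x
    return total

# Main-line version: the loop body already does the right thing for [],
# so there is a single code path to exercise in tests and in production.
def total(xs):
    total = 0
    for x in xs:
        total += x
    return total
```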
Yes, you can make money pursuing your passion. But the point is that the majority of people are not interested in programming and have no passion for it.
Their real passion is having a band, or hiking, traveling, etc. And programming is just a way to fund those activities.
Others feel that programming is something unimportant people do, because they feel entitled to become managers. But why would a person want to work for someone they can't learn anything from? Someone who looks down on their job? No one wants a manager like that.
Those 3 categories fall into the broader category of "enthusiastic programmers", who aren't the majority in my experience. Most people want to leave the office knowing that they have made some progress on their assigned work, that they haven't generally screwed anything up, and that they will be able to sleep at night.
The thing about being "enthusiastic" is that certain cultures and organizations wouldn't at all recognise it and an otherwise enthusiastic programmer can easily become a drudge when the business requirements are just some company-specific internal rules changing all the time.
> [...] an otherwise enthusiastic programmer can easily become a drudge when the business requirements are just some company-specific internal rules changing all the time.
It's not even that: you can take company internal rule changes and make them 'fun'. Or at least as fun as anything else.
It's perhaps more the attitude of the organization. Both the wider organization and their software making department(s).
The trivial way to unify all camps is to view it through the lens of expressionism/self-expression.
Mathematicians want to express equational reasoning/identities.
Computer scientists want to express computations.
Translation/bridging the gap will be a whole lot easier once both camps have answered the trivial 'Why?' question, which contextualises the reason for doing whatever it is that you are doing.
People without shared goals have the tendency to speak right past each other.
One thing that I disliked, though, was the inaccurate caricaturing of people in camp 2. For example:
> Low level languages are often better than high level languages because you can be more explicit about what the computer will do when it executes your code. (Thus you have more room to optimize).
My concern is with the last parenthetical. I am squarely in camp 2, but I'm almost never concerned with optimization. I want my code to be very explicit because I want to understand clearly what it does (thus I hate C++ and other untoward abstractions). Clarity is thus the main motivation, not efficiency as you put it.
Perhaps you are a detail-oriented person? I've met quite a lot of people who - like you it seems (although perhaps you disagree) - find things that are low-level and procedural to be clearest. Such people often like languages like C and Go where this style is encouraged.
On the other hand, I tend to think in terms of abstractions. And thus find things much clearer when they are based on high level abstractions (things like map and filter, RAII, pattern matching, etc). And I like languages like Rust and TypeScript where I can express those. If I read a for-loop, I have to translate it into a more abstract higher-level function in my head to understand what it's doing anyway.
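A toy contrast, in Python just for brevity (made-up data; the same point holds in Rust or TypeScript):
```python
prices = [3.0, 12.5, 7.25, 40.0]

# Detail-oriented style: every step of the mechanism is spelled out.
discounted = []
for p in prices:
    if p > 5.0:
        discounted.append(p * 0.9)

# Abstraction-oriented style: "filter then map" names the intent directly.
discounted2 = [p * 0.9 for p in prices if p > 5.0]

assert discounted == discounted2
```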
I think it may be that. When I see some code like this in C++:
y = f(x);
I'm extremely terrified about what is going on. It is like looking into an abyss. Is this code calling a function named "f" with argument "x"? If so, there may be several different functions "f" depending on the type of "x". Maybe none of them has the same type as "x", but there are some conversions that may happen and lead to that. Or maybe "f" is not even a function, but an object that has overloaded the parenthesis operator? And then, what happens to the result, whatever type it has? Is it calling a copy constructor (over some inheritance chain)? Is the assignment operator overloaded? If running this line results in an execution error, I'm totally lost as to where to find the problem.
On the contrary, in C, the same line says explicitly what code is executed, without need for further context. If the code compiles, there is a single visible definition of a function named "f" and this is the code that is called.
I read much more code than I write. But code with lots of abstractions, like C++ with objects or templates, however elegant it was to write, is a disgusting chore to analyze, understand, and eventually fix (because the abstractions always fail). On the other hand, analyzing C code is pretty straightforward, even if it was a bit more verbose to write.
It sounds more like you have a problem with C++ and some of its features, like function overloading and its move semantics, which I agree can be confusing; they always turned me away from C++, and I just preferred C because it's straightforward. But then I tried Rust and I saw that it's not the abstractions that are confusing, it's C++.
> i think it may be that. When I see some code like this in C++
Indeed and I think this is the root of all the internet shouting on abstraction-heavy languages vs procedural/"simple" languages. A lot of folks that think in abstraction find it tedious and unsafe to deal with detail-oriented languages, while folks who are more detail-oriented find the abstraction disorienting and unsafe in their own way. Given that one group can't do away with another group, I hope we can all learn to work together.
"I got pushback here and elsewhere that lots of people fit into multiple camps - which makes sense."
People do fit into multiple camps... however, the camps definitely exist.
One of the several reasons I didn't go into academia, and one of the bigger ones overall, is that I could tell I wanted to split the difference between the "practicals" and the "mathematicians". And while this may not have been impossible, it was certainly at the very least a "high risk" move, because you end up with the support of neither camp, and there aren't enough people in the middle to make up for it.
I remember reading one of the threads on your post here. The level of depth that the commenters went into regarding every possible permutation of and exception to the broad classification was astounding.
For me, these are three different hats that I keep on my hatstand. I like them all. The dream is to wear all three at once and get paid for it. But, in practice, I'm always wearing hat 3 when I'm getting paid and, if I'm lucky, I can put on one of the other two for a while too. However, I have worked with many people who appear to never switch hats.
Interesting. I have definitely been of the third kind since I was a kid. But over the years I've come to appreciate mathematics (maybe FP was a gateway), and machines, through wanting more real-world connection, learning electronics, and building physical things. Totally agree with your conclusion: no matter the path, all these aspects are important.
I belong to the camp that enjoys programming in a beautiful way and has experimented with languages that are capable of achieving mechanical sympathy without having to deal with C.
Camp 3 is called "professional programmers". I mean:
- sometimes a practical problem benefits greatly from a well-designed high-level abstraction,
- sometimes a practical problem strictly requires low-level optimization.
And yes, it is totally fine to have one's inclinations in general, and pursue them to any level in one's free time, or in some (but not all) academic research. Yet, if a professional software engineer consistently favours one of these motivations over adding value to the product, usually there is friction in the workplace.
I guess you all know the "let's rewrite it in Haskell" people, or the ones using their favourite paradigm for everything, and against everyone. Or the ones who squeeze 20% performance out of a piece of code that does not matter at all for the end user experience.
We need to disabuse ourselves of the notion that "professional" implies any kind of quality, or that the people who program without being paid money are doing a bad job of it, for that matter.
This paper makes a ton of brilliant points, all of which strongly resonate with me:
* The results that will last are the ones that can be organized coherently and explained economically to future generations (yes! effective compression!)
* How effectively a result can be communicated to another mathematician (and perhaps even s/mathematician/person/)
90% of my time spent 'studying mathematics' goes to lexing the notation. What do those symbols even mean? I can't even copy-paste this into Google to get any meaningful results!
Have you noticed how we don't have this problem in Computer Science? Because the source code gives you the context in which to interpret the meaning of the grammar!
I do think there needs to be a better search system for LaTeX/math symbols. That would be amazing. As for using the notation, I forget where I read this, but I remember seeing that one excuse for the use of abstract symbols is to keep the ideas abstract so as to not narrow your mind to just what you're working on. So many areas of math cross over that keeping things abstract could aid in that recognition.
I remember in our intro to AI course our prof would sometimes run out of English letters for symbols, then start introducing German ones, and eventually start using triangles and squares for stuff.
Thanks. I didn’t know this. So it’s a recursive phenomenon and as culture continues developing we can expect quite a mess of a fractal. That would explain the noticeable decline in ability of holistic judgement and the rise of kompetencelessness.
Just to add to a couple of existing comments here: a problem solver is not necessarily an applied person or an engineer. It's someone who looks for interesting unsolved questions, and invents what needs inventing to get a solution.
By contrast, an engineer uses well established techniques (with creativity!) to build bridges that they know can be built. Diving into unsolved mathematical problems - especially the ones that have picked up a reputation - is inherently a riskier endeavor.
As the text says: 'the interesting problems tend to be open precisely because the established techniques cannot easily be applied.'
Actually, the late Prof. Leo Breiman expanded this metaphor into the statistical modelling world, the two cultures being the explanatory and the predictive modelling folk.
The explanatory modelling culture (which he calls the data modelling culture) comprises those who first come up with a guess at how the data is generated and then try to test that hypothesis using goodness-of-fit measures; the predictive modelling culture (which he calls the algorithmic modelling culture) comprises the modern machine learning researchers who are purely interested in predictive power.
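A rough sketch of the contrast (an assumed toy example in Python with numpy, not from Breiman's paper):
```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 1, 200)  # pretend this is observed data

# Data-modelling culture: hypothesize y = a*x + b + noise, fit it, then
# inspect the estimated parameters and residuals (goodness of fit).
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - (a * x + b)
print(f"fitted model: y = {a:.2f}x + {b:.2f}, residual std {residuals.std():.2f}")

# Algorithmic-modelling culture: hold out data and report predictive
# error only; the internal form of the model is beside the point.
train, test = slice(0, 150), slice(150, None)
(a2, b2), *_ = np.linalg.lstsq(A[train], y[train], rcond=None)
mse = np.mean((y[test] - (a2 * x[test] + b2)) ** 2)
print(f"held-out MSE: {mse:.2f}")
```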
Only tangentially related, but a friend of mine had to interrupt his doctoral thesis in mathematics because the set he was studying turned out to be empty. This gave a lot of ammunition to our Russian colleagues, who routinely make fun of French mathematicians for being far too ensconced in theory.
I feel like I could sit down and rewrite this, in a couple of hours, to be about Computer Science - both the practical field and the educational one.
I mean, I could feasibly include this sentence as a quote, depending on which aspect of CS I was talking about (academic CS and language design in particular):
‘It is that the subjects that appeal to theory-builders are, at the moment, much more fashionable than the ones that appeal to problem-solvers.’
It is an interesting article and I will look forward to any discussion about it, especially the correlation to CS and development.
From my experience, among academic computer science professors, the attitude is that computer science is about the "fundamentals" of computing.
E.g., once I was talking with such a computer science professor and listing features I wanted in a better programming language, and immediately his reaction was that for a professor developing such a language would be "academic suicide".
My university had an amazing C++ programming course. Essentially all STEM majors could take credits from the course. CS did not allow credits from this course because "it is just programming" and teaches nothing fundamental.
Meanwhile, this course was more difficult than most other courses, taught me to code much better, which helped in most other courses, and taught me basic principles that I easily generalized and re-applied in other CS courses. I think that a lot of people going into academic work would have been better off for having taken this course, even if they never were to touch a C++-like language again.
The fact that the terms "variance" or "covariance" never came up when talking about inheritance and subtyping in C++ does not mean that this course taught me little about type theory.
That matches my experience as well. It also illustrates the divide in educational tracks, for lack of a better term, that I probably should have expanded on in my original post.
In addition to individual developers and development teams having the potential to be either the problem-solving or the theory-building type, the path a person takes to programming can go in either of those directions. It is most apparent in the difference between a BS in CS from somewhere like Carnegie Mellon (a bastion of high-level theoretical CS education) and any code boot camp or online web-dev-centric training course.
There are several ways to interpret that person's reaction.
1. It would be academic suicide to develop a new programming language.
2. Developing a new programming language is fine, but developing one designed to be better for practitioners is academic suicide.
3. Developing a new programming language for whatever reason is fine, but the particular ideas you were suggesting were of such a nature (perhaps "generally acknowledged as bad" or "not an improvement over accepted practice") as to be academic suicide.
(1) seems unlikely at best: there's a whole section on arXiv for programming languages (https://arxiv.org/list/cs.PL/recent), and just within the last decade we have languages like Julia and Elm coming out of academia.
(2) also seems unlikely: all the examples of recent academic-derived programming languages I can find are designed to make somebody's experience better. (And who would bother designing a programming language if they weren't at least hoping to improve something?)
Without knowing further details, I won't comment any further on (3).
Your points are well taken. I gave the prof's remark of "academic suicide" as an example of a significant attitude in computer science. I didn't claim that the remark was correct, and your examples show that the remark was not correct.
But the more likely explanation for the professor's reaction was not your (1)-(3) but just that prof's strong belief that doing anything in programming language design would be "academic suicide". Again, I didn't claim that the prof's attitude was correct, but it WAS his attitude.
The date of his remark was about 1974, and I was suggesting improvements in PL/I (or similar languages) that I'd been using for about 4 years, for US Navy sonar work, scheduling the fleet at FedEx, etc.
I would guess that the prof's view was that academic research for programming languages in practice was over with -- e.g., LISP, APL, Algol 68, and PL/I were all implemented by 1974.
So, your examples and more show that the prof was wrong: Long after 1974, others in academic computer science did work in programming languages without academic death via suicide or otherwise.
Sooo, in 1974, views of the academic research content of programming language design varied -- such differences of opinion are common, i.e., make horse races.
But that encounter in 1974 was just one example. In the 1990s I had another one from a prof in a top-level computer science department. Gee, at the time the President of that prof's university was one of my Ph.D. dissertation advisors, and I could have warned the prof that he was trying to paddle upstream against the relatively practical orientation of his university president!
Also, with some irony, in the 1980s and 1990s, I was in the group at IBM's Watson lab that designed and implemented the artificial intelligence rule-based language YES/L1. We published several papers in computer science conferences. Moreover, beyond just language design, I was our lead on our joint work with GM Research, the person writing our paper, and one of our two presenters of our paper at the AAAI (American Association for Artificial Intelligence) IAAI (Innovative Applications of Artificial Intelligence) conference at Stanford; so the work, design, implementation, and application of YES/L1 was regarded as worthy of publication in academics. Indeed, MIT published a book with papers from the conference, and our paper was one of the ones published.
So, with my work, now both of us have shown that the comments of those two professors I mentioned were not correct.
Still, it remains, with my two examples from professors, that there long has been an attitude and belief that work in programming languages is not sufficiently "fundamental" to be in computer science and would be "academic suicide". That has been a common attitude; now both of us have shown that the attitude was not correct, but it remains that that attitude has long existed -- that was my claim, that there was such an attitude. I didn't claim that the attitude was universal or correct.
Given that the attitude has been in academic computer science, no doubt the work -- both research and teaching -- of academic computer science departments has been affected. In particular, students who are eager to get high proficiency in using programming languages, and, maybe, writing compilers, can find, sadly, that their department is not much interested.
I have a third example of academic computer science demeaning programming languages: When I was a prof in a Big 10 B-school and MBA program, I'd quickly upset the apple cart of the campus CIO (soon I served on a committee to pick another CIO) and been named the Chair of the Computer Committee of the B-school. And I'd given a grad course in computer selection and management. We were considering what our B-school might do in research and teaching in computing and had a site visit of computer science profs from some other Big 10 universities.
At one of our meetings with the site visit committee, one of the visitors asked me what programming I'd done and in what languages. So, I mentioned some of my 15 years or so of work on a variety of applications in a variety of common languages. Right away his reaction was that from that experience with programming my "brain was ruined for computer science". Again, I'm not saying he was correct! So here is my third example of some academic computer science hostility to programming languages!
This hostility seems to be a special case of a larger, common pattern in parts of academics: Some graduate students and non-tenured faculty live under continual yellow rain from leaks from higher-up latrines. The situation is harmful for all productive purposes for all concerned.
E.g., as a grad student, I got the best score in our class on four of the five Ph.D. qualifying exams, and in the fifth topic I already had, from independent work, a first manuscript of my Ph.D. dissertation, but the yellow rain continued. It wasn't just me: Junior faculty were falling like soldiers at Gettysburg.
By accident I found a solution, a watertight hazmat suit and a stainless steel umbrella: A course in optimization had gotten fairly deep into the Kuhn-Tucker conditions. I saw a question, saw no answers in the likely literature, and asked for a "reading course" to address the question. I had a rough idea how to proceed, and the course was approved. I looked around for existing tools for a solution, found none, so just pursued what I had in mind. Two weeks later I had a nice solution, written up, submitted, and accepted. So, I was done with the "reading course" in two weeks. Part of my solution was a surprising, curious theorem: with the set of real numbers R, a positive integer n, R^n with the usual topology, and a set C a subset of R^n, the set C is closed if and only if it is the level set of an infinitely differentiable function f: R^n --> R. My work in constraint qualifications also answered a question stated but not answered in the famous paper in mathematical economics by Arrow, Hurwicz, and Uzawa. Yes, I found it easy to publish the paper.
When I submitted my work at the end of the two weeks, word spread quickly in the department. The prof I regarded as the best gave me congrats in the hall. Result -- the hazmat suit and umbrella. For the rest of my time through my Ph.D., no more yellow rain.
My Ph.D. was in applied math and not computer science.
It appears that there is a wide range of topics in computer science, and for most of the topics there are profs who regard the topic as good and profs who regard the topic as junk.
The critical stuff of the yellow rain can cause stress, clinical depression, and suicide -- that's what happened to my wife, a sweet old-fashioned girl who won prizes in cooking, sewing, and raising chickens: Valedictorian, Summa Cum Laude, PBK, Woodrow Wilson and NSF Fellowships, and a Ph.D.
I do not now have, nor have I ever had, any interest in being a college prof or publishing papers. I regard such a career as financially irresponsible. My interests are in business, the money-making kind, and there I regard math, existing and/or original, and computing as the main tools. I went to grad school to learn more about the math tools. I was a college prof for a while as part of taking care of my suffering wife.
Unfortunately the article quickly descends into a long defence of the merits of Combinatorics, rather than a description of the two cultures.
Since combinatorics is so ubiquitous today in computer science, algorithms in particular, it would be more interesting for me to better understand the "algebraic number theory and differential geometry" world that the author continuously refers to.
I guess I should just read the Michael Atiyah interview that this article sometimes feels like a direct reply to.
It's a real phenomenon that you see at elite departments. I got into an argument here in the comment section with a mathematician on whether graph theory was a "core" area of mathematics. I don't even like graph theory or combinatorics.
Sylvia Nasar's take on John Forbes Nash, Jr.'s type:
"He was a mathematician who viewed mathematics not as a grand scheme, but as a collection of challenging problems. In the taxonomy of mathematicians, there are problem solvers and theoreticians, and, by temperament, Nash belonged to the first group. He was not a game theorist, analyst, algebraist, geometer, topologist, or mathematical physicist. But he zeroed in on areas in these fields where essentially nobody had achieved anything. The thing was to find an interesting question that he could say something about."
Most often listed is the number '2'. The highest numbers together make '7', and if you defer the multiplication one position to the right, you have '2 x 3', making '6'.
It appears that roughly the point of the OP (original post) is that in math there are two cultures: (1) people who want to develop new fields of math with definitions, theorems, and proofs, and (2) people who want to use math, all or nearly all old, to solve problems usually from outside math.
Apparently the OP is from the UK (United Kingdom). Here I attempt to provide a view and explanation of those two cultures from and for the US.
Math got taken very seriously, due heavily to WWII and The Bomb. The Bomb was seen as coming heavily from Einstein's E = mc^2. So, suddenly the US (especially Congress) concluded that for US national security the US had to lead in science and math. Over the years after WWII, various events, e.g., Sputnik, reinforced this conclusion.
One result was that the US NSF had funds for math research for culture (1) in the US research universities. The emphasis was on what was really new; that is, if there was to be another E = mc^2 result from math and science, then the US wanted to be the first to discover that result. But the NSF was not much interested in funding math in culture (2).
Then there was some irony: For US national security, the NSF was funding math in culture (1) while, also for US national security, especially the Cold War and the Space Race, the US DoD and NASA were heavily funding applications of math via culture (2). In those years, there were good culture (2) careers, especially near DC, for people with comparatively good backgrounds in math and computing.
But in the profit-seeking, practical, commercial US, away from the motivation of the Cold War and the Space Race, math from either culture was ignored, laughed at, or in rare but overwhelming cases terribly feared.
Then there was, and is, help for culture (2): The US research universities also commonly have schools of engineering, and there are journals eager to publish applications of math. E.g., the usual criteria for publication are "new, correct, and significant", and an application of some math that is a new and correct solution for a significant practical problem can be seen to satisfy these criteria and qualify for publication.
So, net, currently, an application of math -- maybe all or nearly all old math -- that is powerful and valuable in practice -- i.e., some secret sauce -- can count on little or no competition.
You seem to imply that the two cultures are pure vs applied but that’s not really right. Something like Sturm-Liouville theory is generally very applied and useful but falls into culture 1. The sort of mathematics that Erdős did was pure but fell more into culture 2.
You might be correct in part or in whole. We'd have to discuss back and forth! My (1) and (2) will have to be at best only rough, and to save space I omitted the real examples I have in mind to formulate those two. Writing clearly, briefly, and precisely about big subjects is not always easy!
The time I encountered Sturm-Liouville theory was a course using the fairly well-known Hildebrand, Advanced Calculus for Applications, from MIT, taught by a professor with a recent MIT Ph.D. By the way, that book is available in PDF on the Internet for free. To me the topic looked like the math of vibrating strings, etc., but the course did not go deeply into Sturm-Liouville theory or anything else. So, from that course I have regarded Sturm-Liouville theory as in (2); but if there is some continuing, deep work there, maybe it belongs in (1). Maybe I touched on that subject once more; talking to M. Athans at MIT, he explained that the optimal control problem I was considering was a "two point boundary value problem with mixed end conditions". So, maybe there are connections with control theory.
For Erdős, I have just regarded him as all in (1), but he seemed to have gotten problems from anywhere, including from outside math, so he might be in (2). Sooo, by problems outside math I meant relatively practical problems, ones where solutions would have a lot of value, financial or scientific, for fields outside of math. I've always regarded Erdős as working a lot on interesting, tricky puzzle problems. But I don't know Erdős's work at all well.
Maybe I can summarize: Back when I had one foot in each of (1) and (2), my definition of applied math was like a recipe for rabbit stew -- "First catch a rabbit." So, first get an application, i.e., from outside math, where an application solves a problem that in some sense is pressing and practical.
Examples: At FedEx, how to schedule the fleet? How to project revenue years in advance? How best to climb, cruise, and descend an airplane? How to plan an airplane tour under uncertainty? Back in military work, how to use the Navier-Stokes equations to design ship propellers? How to process the data to do beam forming from arrays of passive sonar? How to evaluate the survivability of the US SSBN submarines? How to target missiles? How best to search for a submarine? In a simulation, how to generate ocean waves with a given power spectral density? How to respond to ill-conditioned matrices in regression problems? How to do least-squares spline fitting, including for multidimensional data? For each of these problems, there were people who cared a lot about getting the solutions -- these were applications. Puzzle problems seem to be applications of a different kind.
Maybe now there are some good applications to be made in the analysis of DNA.
> But the NSF was not much interested in funding math in culture (2).
I'm not sure that this is supported by the historical record.
For example, from NSF's own page:
https://www.nsf.gov/about/history/overview-50.jsp
> The first NSF grants are awarded to support computation centers and research in numerical analysis. Three years later, a separate budget is created for grants to enable academic institutions to acquire major computer equipment.
Going by those, I don't see how that would not be (2)?
A little later we find UMN's "Institute for Mathematics and its Applications" (https://www.ima.umn.edu/about/history, sounds very (2)-ish) which was established with NSF funding in 1982. And a bit after that, in the 90s, I am fairly confident that some portion of my graduate work in applying mathematics to biological problems was NSF-supported.
The Kelley, General Topology pure math book acknowledges NSF and Office of Naval Research support in years 1950-1952. Same financial supporters for Luenberger, Optimization by Vector Space Methods -- fun with the Hahn-Banach theorem.
Good to hear that the NSF has supported some computing hardware -- my guess was that that was done mostly by the US Department of Energy. E.g., as I recall LINPACK was developed at Oak Ridge.
At one time I was teaching computer science at Georgetown U., and some colleagues wanted to work in speech recognition, asked for NSF support, and were told that the NSF did not support "software development" or some such.
Most of numerical analysis I saw was good definition, theorem, proof math. But recently I saw that the Formula 1 auto racing teams have been using CFD (computational fluid dynamics) for detailed design of the shape, wings, downforce, etc. of their cars. That's a lot of progress in fluid flow since I was working with the Navier-Stokes equations. So, apparently numerical methods for fluid flow have made a LOT of progress. As I recall, at times there was some such work at Courant Institute. I hope the NSF supported some of that progress.
For NSF support of "applying mathematics to biological problems" very good to hear, but my guess would be that that came under NSF support of biology instead of applied math.
Here is more what I had in mind: Daily, the many US research-teaching hospitals take in sick people and make them well. There is good research, but the actual patient care is clinical, that is, serving people, and professional, e.g., with apprenticeship training, a code of ethics, and liability for professional practice. In the US I don't see anything similar for applied math or computer science, with or without NSF support.
Generally US research universities have a huge fraction of their annual operating budgets from grants from the US Federal Government, especially NSF and NIH. Roughly professors apply for grants, and about 60% goes to overhead for the university and, thus, also supports the English department, the string quartet series, the drama company, the alumni magazine, etc. Since the research universities don't much like math applications outside of academics, that NSF/NIH support is not much for such math applications.
E.g., university math departments have lots of talks by professors presenting their solutions without known non-academic problems, but nearly no talks from non-academic people with problems looking for solutions. So, such academic math departments with NSF grants are not using that NSF funding to help non-academic people with problems find solutions.
E.g., in grad school in an applied math department, I studied a lot in optimization and stochastic processes, but I was the only one there with any real non-academic problems that could use work in optimization or stochastic processes.
When I was a B-school prof, I was shocked to discover that B-schools were fine with some quite pure math research but wanted nothing to do with education or practice as in the law school, medical school, school of pharmacy, or agricultural college.
> The Kelley, General Topology pure math book acknowledges NSF and Office of Naval Research support in years 1950-1952. Same financial supporters for Luenberger, Optimization by Vector Space Methods -- fun with the Hahn-Banach theorem.
So what if they did? Your original claim was "NSF funds (1), and does not fund (2)". Symbolically, we can represent this as "A and (not B)", where A is "NSF funds (1)" and B is "NSF funds (2)". To refute that, we need to show the negation, i.e. "(not A) or B". (This is, of course, just one of De Morgan's laws.)
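A quick mechanical check of that equivalence (a throwaway Python truth table):
```python
from itertools import product

# not(A and not B) should equal (not A) or B for every assignment.
for A, B in product([False, True], repeat=2):
    assert ((not A) or B) == (not (A and (not B)))
print("negation check passed")
```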
If I were going for "not A", then Kelley would destroy that line of argument, but in fact I was going for "B", and neither Kelley nor Luenberger bear on that.
> Good to hear that the NSF has supported some computing hardware -- my guess was that that was done mostly by the US Department of Energy. E.g., as I recall LINPACK was developed at Oak Ridge.
> At one time I was teaching computer science at Georgetown U., and some colleagues wanted to work in speech recognition, asked for NSF support, and were told that the NSF did not support "software development" or some such.
Sounds to me like they wrote the grant for the wrong thing, or spun it incorrectly. NSF turns down grants all the time (more than they fund, in fact), and if they didn't make it clear that there was actual research being conducted, then of course they'd get turned down.
"""Much of the initial research, performed with NSF funding, was conducted in the 1980s. This research led to further product development from Dragon, AT&T, IBM and other companies. """
(And presumably your colleagues would have had a reasonable expectation of having had a shot at NSF money, otherwise they wouldn't have wasted their time)
> Most of numerical analysis I saw was good definition, theorem, proof math. But recently I saw that the Formula 1 auto racing teams have been using CFD (computational fluid dynamics) for detailed design of the shape, wings, downforce, etc. of their cars. That's a lot of progress in fluid flow since I was working with the Navier-Stokes equations. So, apparently numerical methods for fluid flow have made a LOT of progress. As I recall, at times there was some such work at Courant Institute. I hope the NSF supported some of that progress.
See, this claim, to me, is bizarre, as a sizable portion of HPC use has historically been doing CFD and of course the NSF was heavily involved.
> For NSF support of "applying mathematics to biological problems" very good to hear, but my guess would be that that came under NSF support of biology instead of applied math.
I went and checked, and the grant was via DMS, which stands for "Division of Mathematical Sciences" and not, as one might have thought, "DNA, Metabolism, and Snakes".
> Here is more what I had in mind: Daily the many US research-teaching hospitals take in sick people and make them well. There is good research, but the actual patient care is clinical, that is, serving people, and professional, e.g., with apprenticeship training, code of ethics, and liability and for professional practice. In the US I don't see anything similar for applied math or computer science, with or without NSF support.
This is an entirely different kettle of fish and also irrelevant.
>Since the research universities don't much like math applications outside of academics
> E.g., university math departments have lots of talks by professors presenting their solutions without known non-academic problems but nearly no talks from non-academic people with problems looking for solutions.
Non-academics usually use back channels and personal communication instead of formal presentations. Maybe in a better world it'd be different.
> When I was a B-school prof, I was shocked to discover that B-schools were fine with some quite pure math research but wanted nothing to do with education or practice as in the law school, medical school, school of pharmacy, or agricultural college.
I used to teach math to business school students, and in the interest of decorum I will say no further on the matter.
Uh, let's see: When I was buying math and physics books, it seemed that a lot of them credited the NSF with funding. I gave two examples, Kelley and Luenberger, but there were more.
Soooo, there was some NSF pure math funding in the 1950s, to respond to a small point. The funding seemed to be generous, since the joke was "While you are up, get me a grant."
And from my non-representative sample way back there, it looked like the NSF loved "The analytic-algebraic topology of the locally Euclidean metrization of infinitely differentiable Riemannian manifolds."
I should have been more clear -- LINPACK is software and IIRC was written at Oak Ridge. So, by then some Federal organization was funding software. So the Georgetown pair who, in about 1973, wanted funding for speech recognition and were told by the NSF that it didn't fund software were, shall we say, early; later the NSF changed its mind.
1973 was a long time ago. So, with your impressive data, the NSF has evolved!
Oh, by the way, the grad and ugrad B-school students I taught did okay: They liked that I got them into matrix theory easily, used duality to prove the two person game theory saddle point result, and made min cost capacitated network flows easy to understand.
To me, the distinction maps onto the Effectual vs Causal distinction. For some path "A to B", some people prefer to consider their present tools/resources (point A) and work forward opportunistically. While others prefer to consider the end-goal (point B) and work backwards recursively. If the analogy is unclear: a theory is a tool; a specific problem is an end-goal.
There is a similar kind of division of labor in physics between theorists and experimentalists. The cultural difference is closer to that between mathematicians and engineers, but of course both groups are entirely dependent on each other and most successful collaborations have both. The days of solo publishing are mostly over.
It sounds like the distinction is between pure and applied mathematics, though the writer says that the battle is within pure mathematics.
I've also read about two other categorizations of mathematicians: active and passive. The active are out trying to prove new theorems while passive mathematicians are trying to collect and generalize past theorems. Finding generalizations also requires new theorems and could be argued to be an active task, but this sounded like a distinction between researcher and educator, and has stuck with me.
[1] https://josephg.com/blog/3-tribes/