- People who enjoy programming because it’s mathematically beautiful (eg Haskell programmers)
- People who enjoy programming because they like reasoning about machines, and like mechanical sympathy (eg C programmers)
- And people who like programming because it can solve real problems for their users.
I got pushback here and elsewhere that lots of people fit into multiple camps - which makes sense. Maybe they’re better described as an ecology of values. But I also think there’s something real here. I worked in a consulting shop a few years ago, and my boss came back from some front-end development conference gushing about a talk he’d seen that I had to watch. The talk was a layman’s description of FP’s immutability approach applied to React. My boss had never heard of immutability before because it had never come up in front-end web development circles. There are lots of opportunities for our software to improve by cross-contaminating all our best ideas.
This is also what rubs me the wrong way about the simplistic way in which FP languages (like Haskell, or Scala with monads) are either cargo-culted or hated. The truth is, the two perspectives really go hand in hand, and we're all the worse for not realizing this.
All notations have trade-offs at the level of semantics, making certain operations more difficult to express (what programmers refer to as 'the expression problem').
That's the entire spiel of reification, first-class citizenship, and the semantics of programming languages.
Instead of getting into the fanboi-ism of 'my language is better than your language', perhaps approach the problem from first principles (design): what is the "IT" that you want to talk about, reason about, express, and manipulate in your program?
That dictates the sort of language you need to solve the problem that you are trying to solve.
You wind up with two camps that cannot communicate. It's often made worse by the functional programmers - who are mostly also imperative programmers - doing a poor job of enabling translation.
The thing about being "enthusiastic" is that certain cultures and organizations wouldn't recognise it at all, and an otherwise enthusiastic programmer can easily become a drudge when the business requirements are just some company-specific internal rules changing all the time.
It's not even that: you can take company internal rule changes and make it 'fun'. Or at least as fun as anything else.
It's perhaps more the attitude of the organization. Both the wider organization and their software making department(s).
Unfortunately, the people that get excited building things are outnumbered by the people that are just in this for the money.
I don't think there's a reason to be sad about it. There are plenty of e.g. maintenance jobs in the industry, and people who are passionate about programming may not be the best fit for this niche - it's boring, and sometimes you feel the urge to re-invent a few wheels even to the detriment of actual business needs. For the not-so-passionate, however, it's a good place - minimize effort while drawing a good salary.
I wouldn't be surprised if people who just program for a paycheck are the majority of employees in many offices. I know they're a tiny minority in a lot of the places I've worked at. That's not a boast or a coincidence - humans cluster in tribes based on shared values. I don't consider myself part of the same community as people who show up to work to close jira tickets and collect a paycheck.
This is why I keep my mouth shut IRL about just wanting the paycheck. I'd imagine a lot more than a tiny minority of people are just there for the job where you've worked. We just keep our mouths shut because the passionate folks judge us paycheckers.
But while you are at your job anyway, might as well make it slightly less of a drudgery while you are at it.
(Eg I quite like functional programming. Both because I like the style of thinking, but also because I don't trust myself nor anyone else with code; and eg a heavy emphasis on immutability means that I'm less likely to get woken up in the middle of the night with some production problem.)
Taking a mercenary attitude towards work works just fine for me, and keeps me overall happier.
Per your example, I like FP and immutability for the same reasons (won't get paged as much), and I'm always reading to stay on top of my game, but it's still just work. If you worked with me, you'd probably think I live and breathe this stuff as a passion because I put in so much time to stay on top of the tools available. But I hate programming. Hell, you might even say my hate of it drives mastery in the same way your passion drives yours. To me, it's just a tool by which to automate things I do care about (and retire younger than I could in other roles).
About FP and immutability specifically:
You don't have to go full on and re-write everything in Haskell. Small scale decisions can already make your life easier without much disruption. Eg if you are using Python, by default stick your information in frozen dataclasses, and only deviate from that with good reason.
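A minimal sketch of that default, with a made-up `UserProfile` type (everything here is illustrative, not from any particular codebase):

```python
from dataclasses import dataclass, replace

# Hypothetical example type; "UserProfile" and its fields are made up.
@dataclass(frozen=True)
class UserProfile:
    name: str
    email: str

profile = UserProfile(name="Ada", email="ada@example.com")

# Mutation raises dataclasses.FrozenInstanceError, so accidental
# in-place changes become loud, early failures.
try:
    profile.email = "other@example.com"
except Exception as e:
    print(type(e).__name__)  # FrozenInstanceError

# "Updates" are expressed as fresh values instead of mutations:
updated = replace(profile, email="other@example.com")
print(updated.email)   # other@example.com
print(profile.email)   # ada@example.com  (original untouched)
```

The `replace` helper keeps the ergonomics close to ordinary assignment while preserving the old value, which is most of what you want from immutability day to day.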
Another thing I have a minor passion for is handling corner cases, as much as possible, in the main line of the code.
Eg instead of having a big 'if' at the start of your function to handle an empty list and return early, try to make sure that the main logic can deal with empty lists just fine.
After all, the simpler your control flow, the easier it is to get code coverage in tests and production.
(I mention `and production', because in practice, lots of code is battle-hardened instead of sufficiently tested.)
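A tiny illustration of the idea, with a made-up pricing function: rather than an early return for the empty list, the main logic already covers it, since `sum()` of an empty list is 0.

```python
# Hypothetical example: pricing a cart. Instead of special-casing an
# empty cart with an early return, let the main logic handle it.

def total_with_discount(prices, discount=0.1):
    # sum() of an empty list is 0, and 0 times anything is 0, so the
    # empty case flows through the same code path as the general one.
    subtotal = sum(prices)
    return subtotal * (1 - discount)

print(total_with_discount([10.0, 20.0]))  # 27.0
print(total_with_discount([]))            # 0.0
```

One code path means one set of tests exercises both the common and the empty case.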
you were passionate, found a job that claimed to match your passion and later found out otherwise. rinse and repeat multiple times, and there you go.
soon you started to be paycheck only at your day job and passionate off hours in your free time.
Their real passion is having a band, or hiking, traveling, etc. And programming is just a way to fund those activities.
Others feel that programming is something that unimportant people do, because they feel entitled to become managers. But why would a person want to work for a person they can't learn anything from? A person that looks down on their job? No one wants a manager like that.
Mathematicians want to express equational reasoning/identities.
Computer scientists want to express computations.
Translation/bridging the gap will be a whole lot easier when both camps have answered the trivial 'Why?' question, which contextualises the reason for doing whatever it is that you are doing.
People without shared goals have the tendency to speak right past each other.
I thoroughly enjoyed your post at the time!
One thing that I disliked, though, was the inaccurate caricaturing of people in camp 2. For example:
> Low level languages are often better than high level languages because you can be more explicit about what the computer will do when it executes your code. (Thus you have more room to optimize).
My concern is the last parenthesis. I am squarely in camp 2, but I'm almost never concerned with optimization. I want my code to be very explicit because I want to understand clearly what it does (thus I hate C++ and other untoward abstractions). Thus clarity is the main motivation, not efficiency as you put it.
On the other hand, I tend to think in terms of abstractions. And thus find things much clearer when they are based on high level abstractions (things like map and filter, RAII, pattern matching, etc). And I like languages like Rust and TypeScript where I can express those. If I read a for-loop, I have to translate it into a more abstract higher-level function in my head to understand what it's doing anyway.
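A small Python illustration of that difference (the names here are made up): the loop form makes the reader reconstruct the filter-then-map shape from the steps, while the comprehension states it directly.

```python
# Hypothetical example: the same transformation written two ways.
numbers = [1, 2, 3, 4, 5]

# Loop form: the reader infers "keep the evens, then square them"
# by tracing the statements.
result_loop = []
for n in numbers:
    if n % 2 == 0:
        result_loop.append(n * n)

# Comprehension form: the filter-then-map shape is stated directly.
result_comp = [n * n for n in numbers if n % 2 == 0]

print(result_loop == result_comp)  # True
print(result_comp)                 # [4, 16]
```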
y = f(x);
On the contrary, in C, the same line says explicitly what code is executed, without need for further context. If the code compiles, there is a single visible definition of a function named "f" and this is the code that is called.
I read much more code than I write. But code with lots of abstractions, like C++ with objects or templates, however elegant it was to write, is a disgusting chore to analyze, understand, and eventually fix (because the abstractions always fail). On the other hand, analyzing C code is pretty straightforward, even if it was a bit more verbose to write.
Indeed and I think this is the root of all the internet shouting on abstraction-heavy languages vs procedural/"simple" languages. A lot of folks that think in abstraction find it tedious and unsafe to deal with detail-oriented languages, while folks who are more detail-oriented find the abstraction disorienting and unsafe in their own way. Given that one group can't do away with another group, I hope we can all learn to work together.
People do fit into multiple camps... however, the camps definitely exist.
One of the several reasons I didn't go into academia, and one of the bigger ones overall, is that I could tell I wanted to split the difference between the "practicals" and the "mathematicians". And while this may not have been impossible, it was certainly at the very least a "high risk" move, because you end up with the support of neither camp, and there aren't enough people in the middle to make up for it.
- sometimes a practical problem benefits greatly from a well-designed high-level abstraction,
- sometimes a practical problem strictly requires low-level optimization.
And yes, it is totally fine to have one's inclination in general, pursue them to any level in their free time, or in some (but not all) academic research. Yet, if a professional software engineer consistently favours one of these motivations over adding value to the product, usually there are frictions in the workplace.
I guess you all know the "let's rewrite it in Haskell" people, or the ones using their favourite paradigm for everything, and against everyone. Or the ones that squeeze out 20% performance in a part of the code that does not matter at all for the end-user experience.
We need to disabuse ourselves of the notion that "professional" implies any kind of quality, or that the people who program without being paid money are doing a bad job of it, for that matter.
The two cultures of mathematics and biology - https://news.ycombinator.com/item?id=8819811 - Dec 2014 (69 comments)
The Two Cultures of Mathematics (2000) [pdf] - https://news.ycombinator.com/item?id=7970284 - July 2014 (28 comments)
"Problem Solvers" vs "Theory Developers": The Two Cultures of Mathematics. [pdf] - https://news.ycombinator.com/item?id=682913 - July 2009 (5 comments)
* The results that will last are the ones that can be organized coherently and explained economically to future generations (yes! effective compression!)
* How effectively a result can be communicated to another mathematician (and perhaps even s/mathematician/person/)
90% of my time spent 'studying mathematics' is spent lexing the notation. What do those symbols even mean? I can't even copy-paste this into Google to get any meaningful results!
Have you noticed how we don't have this problem in Computer Science? Because the source code gives you the context in which to interpret the meaning of the grammar!
Homoiconicity is the panacea of formal languages.
By contrast, an engineer uses well established techniques (with creativity!) to build bridges that they know can be built. Diving into unsolved mathematical problems - especially the ones that have picked up a reputation - is inherently a riskier endeavor.
As the text says: 'the interesting problems tend to be open precisely because the established techniques cannot easily be applied.'
The explanatory modelling culture (which he calls the data modelling culture) consists of those who first come up with a guess at how the data is generated and then try to test that hypothesis using goodness-of-fit measures; the predictive modelling culture (which he calls the algorithmic modelling culture) consists of the modern machine-learning researchers who are purely interested in predictive power.
The full essay is here: https://projecteuclid.org/journals/statistical-science/volum...
I mean, I could feasibly include this sentence as a quote, depending on which aspect of CS I was talking about (academic CS and language design in particular):
‘It is that the subjects that appeal to theory-builders are, at the moment, much more fashionable than the ones that appeal to problem-solvers.’
It is an interesting article and I will look forward to any discussion about it, especially the correlation to CS and development.
E.g., once I was talking with such a computer science professor and listing features I wanted in a better programming language, and immediately his reaction was that for a professor developing such a language would be "academic suicide".
Meanwhile, this course was more difficult than most other courses, taught me to code much better, which helped in most other courses, and taught me basic principles that I easily generalized and re-applied in other CS courses. I think that a lot of people going on to do academic work would have been better off for having taken this course, even if they never were to touch a C++-like language again.
The fact that, when talking about inheritance and sub-typing in C++, the terms "variance" or "co-variance" never came up does not mean that this course taught me little about type theory.
In addition to individual developers and development teams having the potential to be either the problem-solving or the theory-building type, the path a person takes to programming can go either of those directions. It is most apparent in the difference between a BS in CS from somewhere like Carnegie Mellon (a bastion of high-level theoretical CS education) and any code boot camp or online web-dev-centric training course.
1. It would be academic suicide to develop a new programming language.
2. Developing a new programming language is fine, but developing one designed to be better for practitioners is academic suicide.
3. Developing a new programming language for whatever reason is fine, but the particular ideas you were suggesting were of such a nature (perhaps "generally acknowledged as bad" or "not an improvement over accepted practice") as to be academic suicide.
(1) seems unlikely at best, there's a whole section on arxiv for programming languages (https://arxiv.org/list/cs.PL/recent), and just within the last decade we have languages like Julia and Elm coming out of academia.
(2) also seems unlikely, all the examples of recent academic-derived programming languages I can find are designed to make somebody's experience better. (And who would bother designing a programming language if they weren't at least hoping to improve something?)
Without knowing further details, I won't comment any further on (3).
The date of his remark was about 1974, and I was suggesting improvements in PL/I that I'd been using for about 4 years.
I believe his view was that academic research for programming languages in practice was over with -- e.g., LISP, APL, Algol 68, and PL/I were all implemented by 1974.
So, your examples show that he was wrong: Long after 1974 others in academics did work in programming languages without academic death via suicide or otherwise.
Sooo, in 1974, views of the academic research content of programming language design varied -- such variations are with us frequently, i.e., make horse races.
But that encounter in 1974 was just one example. I had another one from a computer science professor in the 1990s.
Also, with some irony, in the 1980s and 1990s, I was in the group at IBM's Watson lab that designed and implemented the artificial intelligence rule-based language YES/L1. We published lots of papers in academic conferences. Moreover, beyond just design, I was our lead on joint work with GM Research and was one of our two presenters of our paper at the AAAI (American Association for Artificial Intelligence) IAAI (Innovative Applications of Artificial Intelligence) conference at Stanford; so the work, design, implementation, and application of YES/L1 was regarded as worthy of publication in academics. Indeed, MIT published a book with papers from the conference, and our paper was one of the ones published.
So, with my work, I, too, showed that the comments of those two professors I mentioned were not correct.
Still, it remains, with my two examples from professors, there long has been an attitude and belief that work in programming languages is not sufficiently "fundamental" to be in computer science and would be "academic suicide". That has been a common attitude; now both of us have shown that the attitude was not correct, but it remains that that attitude has long existed -- that was my claim, that there was such an attitude. I didn't claim that the attitude was universal or correct.
But the more likely explanation for the professor's reaction was not your (1)-(3) but just that prof's strong belief that doing anything in programming language design would be "academic suicide". Again, I didn't claim that the prof's attitude was correct, but it WAS common.
(Some details: the PL/I improvements I suggested in 1974 drew on about 4 years of using the language for US Navy sonar work, scheduling the fleet at FedEx, etc. And the 1990s prof was in a top-level computer science department; at the time, the President of that prof's university was one of my Ph.D. dissertation advisors, and I could have warned the prof that he was trying to paddle upstream against the relatively practical orientation of his university.)
Given that the attitude has been in academic computer science, no doubt the work -- both research and teaching -- of academic computer science departments has been affected. In particular, students who are eager to get high proficiency in using programming languages and, maybe, writing compilers can find, sadly, that their department is not much interested.
I have a third example of academic computer science demeaning programming languages: When I was a prof in a Big 10 B-school and MBA program, I'd quickly upset the apple cart of the campus CIO (soon I served on a committee to pick another CIO) and been named the Chair of the Computer Committee of the B-school. And I'd given a grad course in computer selection and management. We were considering what our B-school might do in research and teaching in computing and had a site visit of computer science profs from some other Big 10 universities.

At one of our meetings with the site visit committee, one of the visitors asked me what programming I'd done and in what languages. So, I mentioned some of my 15 years or so of work on a variety of applications in a variety of common languages. Right away his reaction was that from that experience with programming my "brain was ruined for computer science". Again, I'm not saying he was correct! So here is my third example of some academic computer science hostility to programming languages!
This hostility seems to be a special case of a larger, common pattern in parts of academics: Some graduate students and non-tenured faculty live under continual yellow rain from leaks from higher up. The situation is harmful for all productive purposes for all concerned. E.g., as a grad student, I got the best in our class on four of the five Ph.D. qualifying exams and in the fifth topic already had, from independent work, a first manuscript of my Ph.D. dissertation, but the yellow rain continued. It wasn't just me: Junior faculty were falling like soldiers at Gettysburg.
By accident I found a solution, a watertight hazmat suit and a stainless steel umbrella: A course in optimization had gotten fairly deep into the Kuhn-Tucker conditions. I saw a question, saw no answers in the likely literature, and asked for a "reading course" to address the question. I had a rough idea how to proceed, and the course was approved. I looked around for existing tools for a solution, found none, so just pursued what I had in mind. Two weeks later I had a nice solution, written up, submitted, and accepted. So, I was done with the "reading course" in two weeks.

Part of my solution was a surprising, curious theorem: with the set of real numbers R, a positive integer n, R^n with the usual topology, and a set C a subset of R^n, set C is closed if and only if it is the level set of an infinitely differentiable function f: R^n --> R. My work in constraint qualifications also answered a question stated but not answered in the famous paper in mathematical economics by Arrow, Hurwicz, and Uzawa. Yes, I found it easy to publish the paper.
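Stated in standard notation (this is just a restatement of the theorem as described above):

```latex
\textbf{Theorem.} Let $n$ be a positive integer and give $\mathbb{R}^n$ its
usual topology. A set $C \subseteq \mathbb{R}^n$ is closed if and only if
there exists an infinitely differentiable function
$f \colon \mathbb{R}^n \to \mathbb{R}$ with
\[
  C \;=\; f^{-1}(\{0\}) \;=\; \{\, x \in \mathbb{R}^n : f(x) = 0 \,\}.
\]
```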
When I submitted my work at the end of the two weeks, word spread quickly in the department. The prof I regarded as the best gave me congrats in the hall. Result -- the hazmat suit and umbrella. For the rest of my time through my Ph.D., no more yellow rain. My Ph.D. was in applied math, not computer science.
It appears that there is a wide range of topics in computer science, and for most of the topics there are profs who regard the topic as good and profs who regard the topic as junk.
The critical stuff of the yellow rain can cause stress, clinical depression, and suicide -- that's what happened to my wife, a sweet old-fashioned girl who won prizes in cooking, sewing, and raising chickens; Valedictorian, Summa Cum Laude, PBK, Woodrow Wilson and NSF Fellowships, and …
I do not now, nor have I ever had, any interest in being a college prof or publishing papers. I regard such a career as financially irresponsible. My interests are in business, the money-making kind, and there I regard math, existing and/or original, and computing as the main tools. I went to grad school to learn more about the math tools. I was a college prof for a while as part of taking care of my suffering wife.
Since combinatorics is so ubiquitous today in computer science, algorithms in particular, it would be more interesting for me to better understand the "algebraic number theory and differential geometry" world that the author continuously refers to.
I guess I should just read the Michael Atiyah interview that this article sometimes feels like a direct reply to.
"He was a mathematician who viewed mathematics not as a grand scheme, but as a collection of challenging problems. In the taxonomy of mathematicians, there are problem solvers and theoreticians, and, by temperament, Nash belonged to the first group. He was not a game theorist, analyst, algebraist, geometer, topologist, or mathematical physicist. But he zeroed in on areas in these fields where essentially nobody had achieved anything. The thing was to find an interesting question that he could say something about."
'12 x 23 = 276'
Most often listed is the digit '2'. The highest numbers together make '7', and if you defer 'the multiplication one position to the right', you have '2 x 3', making '6'.
Apparently the OP is from the UK (United Kingdom). Here I attempt to provide a view and explanation of those two cultures from and for the US.
Math got taken very seriously due heavily to WWII and The Bomb. The Bomb was seen as coming heavily from Einstein's E = mc^2. So, suddenly the US (especially Congress) concluded that for US national security the US had to lead in science and math. Over the years after WWII, various events, e.g., Sputnik, reinforced this conclusion.

One result was that the US NSF had funds for math research for culture (1) in the US research universities. The emphasis was on what was really new; that is, if there was to be another E = mc^2 result from math and science, then the US wanted to be the first to discover that result. But the NSF was not much interested in funding math in culture (2).
Then there was some irony: For US national security, the NSF was funding math in culture (1), while also for US national security, especially the Cold War and the Space Race, the US DoD and NASA were heavily funding applications of math via culture (2). In those years, there were good culture (2) careers, especially near DC, for people with comparatively good backgrounds in math and computing.

But in the profit-seeking, practical, commercial US, away from the motivation of the Cold War and the Space Race, math from either culture was ignored, laughed at, or in rare but overwhelming cases terribly …
Then there was and is help for culture (2): The US research universities also commonly have schools of engineering, and there are journals eager to publish applications of math. E.g., the usual criteria for publication are "new, correct, and significant", and an application of some math that is a new and correct solution for a significant practical problem can be seen to satisfy these criteria and qualify for publication.
So, net, currently, an application of math -- maybe all or nearly all old math -- that is powerful and valuable in practice -- i.e., some secret sauce -- can count on little or no competition.
The time I encountered Sturm-Liouville theory was in a course from the fairly well known Hildebrand, Advanced Calculus for Applications, from MIT, taught by a professor with a recent MIT Ph.D. By the way, that book is available in PDF on the Internet for free. To me the topic looked like the math of vibrating strings, etc., but the course did not go deeply into Sturm-Liouville theory or anything else. So, from that course I have regarded Sturm-Liouville theory as now in (2); but if there is some continuing, deep work there, maybe it belongs in (1). Maybe I touched on that subject once more: talking to M. Athans at MIT, he explained that the optimal control problem I was considering was a "two point boundary value problem with mixed end conditions". So, maybe there are connections with control theory.
For Erdős, I have just regarded him as all in (1), but he seemed to have gotten problems from anywhere, including from outside math, so he might be in (2). Sooo, for the problems outside math, I meant relatively practical problems, ones where solutions would have a lot of value, financial or scientific, for fields outside of math. I've always regarded Erdős as working a lot on interesting, tricky, puzzle problems. But I don't know Erdős's work at all well.
Maybe I can summarize: Back when I had one foot in each of (1) and (2), my definition of applied math was like a recipe for rabbit stew -- "First catch a rabbit." So, first get an application, i.e., from outside math, where an application solves a problem that in some sense is pressing and practical.
Examples: At FedEx, how to schedule the fleet? How to project revenue for years in advance? How best to climb, cruise, and descend an airplane? How to plan an airplane tour under uncertainty. Back in military work, how to use the Navier-Stokes equations to design ship propellers? How to process the data to do beam forming from arrays of passive sonar? How to evaluate the survivability of the US SSBN submarines? How to target missiles? How best to search for a submarine? In a simulation, how to generate ocean waves with a given power spectral density? How to respond to ill-conditioned matrices in regression problems? How to do least squares spline fitting, including for multidimensional data? For each of these problems, there were people who cared a lot about getting the solutions -- these were applications. Puzzle problems seem to be applications of a different kind.
Maybe now there are some good applications to be made in the analysis of DNA.
I'm not sure that this is supported by the historical record.
For example, from NSF's own page:
> The first NSF grants are awarded to support computation centers and research in numerical analysis. Three years later, a separate budget is created for grants to enable academic institutions to acquire major computer equipment.
That seems like it's very firmly in your (2).
Going further, I found the distribution of NSF grants for 1952. Mathematics got one grant, to Lamberto Cesari, for work on "Asymptotic Behavior and Stability Problems". Cesari has a wiki page with links to some of his articles, for example: https://projecteuclid.org/journals/bulletin-of-the-american-... , and a bio at St Andrews: https://mathshistory.st-andrews.ac.uk/Biographies/Cesari/
Going by those, I don't see how that would not be (2)?
A little later we find UMN's "Institute for Mathematics and its Applications" (https://www.ima.umn.edu/about/history, sounds very (2)-ish) which was established with NSF funding in 1982. And a bit after that, in the 90s, I am fairly confident that some portion of my graduate work in applying mathematics to biological problems was NSF-supported.
Good to hear that the NSF has supported some computing hardware -- my guess was that that was done mostly by the US Department of Energy. E.g., as I recall LINPACK was developed at Oak Ridge.
At one time I was teaching computer science at Georgetown U., and some colleagues wanted to work in speech recognition, asked for NSF support, and were told that the NSF did not support "software development" or some such.
Most of numerical analysis I saw was good definition, theorem, proof math. But recently I saw that the Formula 1 auto racing teams have been using CFD (computational fluid dynamics) for detailed design of the shape, wings, downforce, etc. of their cars. That's a lot of progress in fluid flow since I was working with the Navier-Stokes equations. So, apparently numerical methods for fluid flow have made a LOT of progress. As I recall, at times there was some such work at Courant Institute. I hope the NSF supported some of that progress.
For NSF support of "applying mathematics to biological problems" very good to hear, but my guess would be that that came under NSF support of biology instead of applied math.
Here is more what I had in mind: Daily the many US research-teaching hospitals take in sick people and make them well. There is good research, but the actual patient care is clinical, that is, serving people, and professional, e.g., with apprenticeship training, a code of ethics, and liability for professional practice. In the US I don't see anything similar for applied math or computer science, with or without NSF support.
Generally US research universities draw a huge fraction of their annual operating budgets from grants from the US Federal Government, especially NSF and NIH. Roughly, professors apply for grants, and about 60% of each grant goes to overhead for the university and, thus, also supports the English department, the string quartet series, the drama company, the alumni magazine, etc. Since the research universities don't much like math applications outside of academics, that NSF/NIH support is not much for such math applications.
E.g., university math departments have lots of talks by professors presenting their solutions without known non-academic problems but nearly no talks from non-academic people with problems looking for solutions. So, such academic math departments with NSF grants are not using that NSF funding to help non-academic people with problems find solutions.
E.g., in grad school in an applied math department, I studied a lot of optimization and stochastic processes, but I was the only one there with any real non-academic problems that could use work in optimization or stochastic processes.
When I was a B-school prof, I was shocked to discover that B-schools were fine with some quite pure math research but wanted nothing to do with education or practice as in the law school, medical school, school of pharmacy, or agricultural college.
So what if they did? Your original claim was "NSF funds (1), and does not fund (2)". Symbolically, we can represent this as "A and (not B)", where A is "NSF funds (1)" and B is "NSF funds (2)". To refute that, we need to show the negation, i.e. "(not A) or B" (this is, of course, just De Morgan's law).
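That negation step can be checked mechanically. A minimal truth-table sketch in Python (the names `claim` and `negation` are illustrative, not from the thread):

```python
from itertools import product

# A = "NSF funds (1)", B = "NSF funds (2)"
# Original claim: A and (not B)
def claim(a, b):
    return a and not b

# Proposed refutation shape: (not A) or B
def negation(a, b):
    return (not a) or b

# De Morgan: not(A and not B) == (not A) or B, for every truth assignment
for a, b in product([True, False], repeat=2):
    assert (not claim(a, b)) == negation(a, b)

print("De Morgan check passed")
```

So establishing either "not A" or "B" alone suffices to refute the original conjunction, which is exactly the move being made here.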
If I were going for "not A", then Kelley would destroy that line of argument, but in fact I was going for "B", and neither Kelley nor Luenberger bears on that.
> Good to hear that the NSF has supported some computing hardware -- my guess was that that was done mostly by the US Department of Energy. E.g., as I recall LINPACK was developed at Oak Ridge.
"In 1985, NSF began funding the creation of five new supercomputing centers"
> At one time I was teaching computer science at Georgetown U., and some colleagues wanted to work in speech recognition, asked for NSF support, and were told that the NSF did not support "software development" or some such.
Sounds to me like they wrote the grant for the wrong thing, or spun it incorrectly. NSF turns down grants all the time (more than they fund, in fact), and if they didn't make it clear that there was actual research being conducted, then of course they'd get turned down.
In fact, when it comes to speech recognition, NSF says they funded a lot of it back when:
"""Much of the initial research, performed with NSF funding, was conducted in the 1980s. This research led to further product development from Dragon, AT&T, IBM and other companies. """
(And presumably your colleagues would have had a reasonable expectation of having had a shot at NSF money, otherwise they wouldn't have wasted their time)
> Most of numerical analysis I saw was good definition, theorem, proof math. But recently I saw that the Formula 1 auto racing teams have been using CFD (computational fluid dynamics) for detailed design of the shape, wings, downforce, etc. of their cars. That's a lot of progress in fluid flow since I was working with the Navier-Stokes equations. So, apparently numerical methods for fluid flow have made a LOT of progress. As I recall, at times there was some such work at Courant Institute. I hope the NSF supported some of that progress.
See, this claim, to me, is bizarre, as a sizable portion of HPC use has historically been doing CFD and of course the NSF was heavily involved.
For example : https://nsf.gov/pubs/1996/nsf9646/nsf9646.pdf and https://www.nsf.gov/od/lpa/news/03/tip031113_topsupercompute...
> For NSF support of "applying mathematics to biological problems" very good to hear, but my guess would be that that came under NSF support of biology instead of applied math.
I went and checked, and the grant was via DMS, which stands for "Division of Mathematical Sciences" and not, as one might have thought, "DNA, Metabolism, and Snakes".
> Here is more what I had in mind: Daily the many US research-teaching hospitals take in sick people and make them well. There is good research, but the actual patient care is clinical, that is, serving people, and professional, e.g., with apprenticeship training, a code of ethics, and liability for professional practice. In the US I don't see anything similar for applied math or computer science, with or without NSF support.
This is an entirely different kettle of fish and also irrelevant.
>Since the research universities don't much like math applications outside of academics
.... What? https://www.pacm.princeton.edu/about ??? https://amath.washington.edu/history ???
> E.g., university math departments have lots of talks by professors presenting their solutions without known non-academic problems but nearly no talks from non-academic people with problems looking for solutions.
Non-academics usually use back channels and personal communication instead of formal presentations. Maybe in a better world it'd be different.
> When I was a B-school prof, I was shocked to discover that B-schools were fine with some quite pure math research but wanted nothing to do with education or practice as in the law school, medical school, school of pharmacy, or agricultural college.
I used to teach math to business school students, and in the interest of decorum I will say no further on the matter.
Soooo, there was some NSF pure math funding in the 1950s, to respond to a small point. The funding seemed to be generous, since the joke was "While you are up, get me a grant."
And from my non-representative sample way back there, it looked like the NSF loved "The analytic-algebraic topology of the locally Euclidean metrization of infinitely differentiable Riemannian manifolds."
I should have been clearer -- LINPACK is software and IIRC was written at Oak Ridge. So, by then some Federal organization was funding software. So the Georgetown pair who, in about 1973, wanted funding for speech recognition and were told by the NSF that it didn't fund software were, shall we say, early, and the NSF later changed its mind.
1973 was a long time ago. So, with your impressive data, the NSF has evolved!
Oh, by the way, the grad and ugrad B-school students I taught did okay: They liked that I got them into matrix theory easily, used duality to prove the two person game theory saddle point result, and made min cost capacitated network flows easy to understand.
I've also read about another categorization of mathematicians: active and passive. The active are out trying to prove new theorems, while passive mathematicians are trying to collect and generalize past theorems. Finding generalizations also requires new theorems and could be argued to be an active task, but this sounded like a distinction between researcher and educator, and it has stuck with me.