>Here was the heart of the crisis. Infinite sums of trigonometric functions had appeared before. Daniel Bernoulli (1700-1782) proposed such sums in 1753 as solutions to the problem of modeling the vibrating string. They had been dismissed by the greatest mathematician of the time, Leonhard Euler (1707-1783). Perhaps Euler scented the danger they presented to his understanding of calculus. The committee that reviewed Fourier's manuscript, consisting of Pierre Simon Laplace (1749-1827), Joseph Louis Lagrange (1736-1813), Sylvestre François Lacroix (1765-1843), and Gaspard Monge (1746-1818), echoed Euler's dismissal in an unenthusiastic summary written by Siméon Denis Poisson (1781-1840). Lagrange was later to make his objections explicit.
>Well into the 1820s, Fourier series would remain suspect because they contradicted the established wisdom about the nature of functions. Fourier did more than suggest that the solution to the heat equation lay in his trigonometric series. He gave a simple and practical means of finding those coefficients, the a_i, for any function. In so doing, he produced a vast array of verifiable solutions to specific problems. Bernoulli's proposition could be debated endlessly with little effect, for it was only theoretical. Fourier was modeling actual physical phenomena. His solution could not be rejected without forcing the question of why it seemed to work.
I picture this in my head as Fourier setting some shit on fire, hand calculating Fourier coefficients, then just pointing and yelling "SEE! SEE!" at Poisson and Lagrange.
From "A Radical Approach to Real Analysis" by David Bressoud
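For concreteness: the "simple and practical means" Bressoud refers to are the coefficient integrals a_n = (1/π) ∫ f(x) cos(nx) dx and b_n = (1/π) ∫ f(x) sin(nx) dx taken over [-π, π]. Here's a minimal numerical sketch of that recipe (my own illustration, not Fourier's derivation; the function name and the midpoint-Riemann-sum approximation are my choices), applied to the square wave whose series is (4/π) Σ sin(nx)/n over odd n:

```python
import math

def fourier_coefficients(f, n_max, samples=10_000):
    """Approximate the Fourier coefficients of f on [-pi, pi]:
    a_n = (1/pi) * integral of f(x) cos(nx) dx,
    b_n = (1/pi) * integral of f(x) sin(nx) dx,
    using a midpoint Riemann sum with `samples` subintervals."""
    dx = 2 * math.pi / samples
    xs = [-math.pi + (k + 0.5) * dx for k in range(samples)]
    a, b = [], []
    for n in range(n_max + 1):
        a.append(sum(f(x) * math.cos(n * x) for x in xs) * dx / math.pi)
        b.append(sum(f(x) * math.sin(n * x) for x in xs) * dx / math.pi)
    return a, b

# Square wave: +1 on (0, pi), -1 on (-pi, 0).
# Exact coefficients: b_n = 4/(n*pi) for odd n, and 0 otherwise.
square = lambda x: 1.0 if x > 0 else -1.0
a, b = fourier_coefficients(square, 5)
print(b[1])  # should be close to 4/pi
```

This is the practical punchline of the passage: the same integral formula mechanically produces the coefficients for any integrable f, which is exactly what let Fourier point at verifiable answers rather than argue theory.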
It wasn't until half a century later, after Cauchy, that mathematicians had a powerful and coherent foundation for calculus. It's true that interesting ideas such as infinitesimals were then rejected because they lacked comparable rigour: was Bressoud conflating these two time periods?
As to the foundations, what Fourier's work shattered was the sufficiency of the then-established notion of what a function is, which had been limited to what we now call analytic functions; this eventually led to the abstract definition of a function that we have today.
> calculus was given a new name: Analysis
"Mathematical analysis of the infintesimal", peaked in the works of Euler, was just that: an attempt to mathematically analyze, among other things, the notion and applications of the infinitely small (and infinity in general); for some reason the words "mathematical" and "infinity" were subsequently dropped, and we are left just with a generic term "analysis" which now requires context to be understood properly.
This isn't quite what Formalism is, at least not as Hilbert -- the originator of that philosophy -- described it. In short, Formalism says that some mathematical sentences might not have an external meaning, and those are the ones that are no more than a game with symbols.
More precisely, Hilbert divided mathematical formulas into "real" ones, those that do have external meaning, and "ideal" ones, those that do not. The real formulas are usually finitary, while the ideal ones usually deal with infinities. Formalism is the view that mathematics is allowed to contain ideal sentences provided they do not yield contradictions with real ones.
Quine disagrees with the logical positivists in a way that I find a little tricky to pin down, despite his and their writing being much clearer than that of "continental" philosophers, but I have found everything I've read from either camp very thought-provoking.
One might argue the same thing about math, that at some level it's fundamentally a human creation. In fact, regardless of your position on this, I think it's safe to argue that if one accepts the legitimacy of the question "why is mathematics so useful in representing the external world?", that person is implicitly accepting the idea that math is at some level a human -- i.e., internal -- construct; otherwise the question wouldn't make sense.
As such, someone might argue that the reason math is so good at representing external reality is because it's part of our representational system for external reality. That is, they're both the same: "external reality" is really "our understanding of external reality" which is in turn part of the same representational system as math.
... at least that's what I think the Quinean perspective would be? He probably wrote about this somewhere, but something like that is my guess. I think a more useful discussion might be something like "why does math work at all in prediction?"
One interesting thing that arises from a Quinean take -- and is maybe implied by the essay in its discussion of areas where math doesn't predict well -- is that it's possible that actual reality deviates in significant ways from what is afforded by our current mathematics, and that maybe there's some other representational system that would be better. "Mathematics" is sufficiently broad in scope that I think whatever it is would still be subsumed under that label (raising the tautological argument again), but at least the idea is that there's possibly some way in which our current mathematical understanding is "off" in a very fundamental way, like at the level of fundamental logic or something.
My thought process began with the observation that all mathematical equations can be translated to English and vice versa. There must be a shared structure between language and math, or an isomorphism at the least. The effectiveness of mathematics, then, is more about its conciseness, not that it says anything new about the world that we don't have with language.
One "natural" idea is that both our internal representations and external reality depend on spatio-temporal relations. Through sptatio-temporal relations a sound might hit our left ear before our right, or that we have a memory of the past and not the future. Noticing these relations are what are brain is good at, because it too is spatio-temporally laid out. We could then imagine from this base set of relations we construct language and math, to capture them.
But, and I hate to sound cliché, quantum mechanics might be showing relations beyond this picture of spacetime. Do math and language need to add some new relations to their repertoire to capture entanglement and wavefunction collapse? Or can they get there with their current structure? It kind of depends on what QM is really telling us. But how to get there... If math and language were built from only relations our brain and body could sense, maybe we are in danger of making empirically refutable mathematics. How that would even look, I have no idea. I have come across philosophy-of-mathematics essays saying math is our most global framework, able to accommodate any and all structure thus far, but still open to the possibility of empirical refutation.
I don't want to get too "out there", but QM did spell the end for our classical picture of the world. Math gives us the precise statistics (e.g. wavefunctions), but does not tell us the causal story of QM. There are causal stories of QM within current mathematics (Bohmian mechanics, etc.), but they too disrupt our conception of spacetime and relativity. If the world is not spacetime-limited, doesn't that call into question our senses, language, and math, if they came about in the above way? I'm not entirely sure how the classical picture fits with math and language, but they seem connected to a large degree.
Indeed! Ultimately, this is expressed in mathematical notation - which is necessary if you want math to "just work" for you (almost automagically).
> If math and language were built from only relations our brain and body could sense
Which they, of course, are; hence the whole mystery of the "unreasonable effectiveness" of our "everyday" mathematics (such as linear algebra or complex analysis) in areas unreachable to our senses or to our ability even to imagine things (e.g., the quantum world).
I think the way some philosophers tend to phrase it is that we develop mathematics as a common language which we agree to use to communicate our theories, which we can then attempt to verify via perception. Our choice of axioms etc. is pretty free, but we tend to pick the ones which are both minimal and most useful for communication and for making verifiable predictions.
I think that where the positivists and Quine differ is that the positivists say that once you have chosen axioms, you can make purely analytic statements, whose truth (within that system) depends only on reasoning according to those axioms, as opposed to statements whose truth does depend on our perceptions. Quine denies either that this is a useful distinction or that there is any difference at all; I'm not sure which.