Exactly this. The fact that all of the mathematicians I've been around
to date (except for one, but he's a ham, so he doesn't really count in
this statistic since he already counts as a hacker :) have treated their
overall field more like a science than an art has really disenchanted me
with it. Yes, mathematics is regarded as the queen of all sciences, but
I don't entirely buy into that. It has applications in the sciences, and
that's probably where its connection to science ends. (Disclaimer: I studied applied
mathematics at a small state university at the undergraduate level.)
Math is beautiful not because it is full of intricate logical machinery
and useful computational tools and pretty pictures;
rather, math is beautiful because the intricate logical machinery can
take different forms (how many different proofs are there of the fundamental
theorem of calculus? Of the Pythagorean theorem?) and because it's an imaginary
world inside one's head where there are arbitrary -- even infinite -- dimensions and
The point of these meetings is to inform (and even pique the interest of) your colleagues. If they have
no idea what's going on and don't even understand the fundamental
notions, what's the point? You're wasting your time, keystrokes,
breath, and energy, not to mention the money provided to you by some
funding agency.
Yes, we share some parent way up the tree, but we can't go all the way back to that as a starting point because it'd simply take too long to progress from there back down to the interesting leaf node.
This vision makes me wonder when mathematics will reach a point where mastering the material required to understand a leaf node will take longer than the average life span.
I disagree. I observed that, up until graduation, the situation is more like a search graph. When I studied, I noticed many interdisciplinary connections. Fields like mechanics, electronics, logic, control theory, and programming often look like two sides of the same coin.
However, I also noticed an inability in others to see the damn connections. If you present the same thing from another angle, they often fail to see it as the same thing, while I factor out the obvious pattern like you would factor code (I've seen it in the case of 3D vision).
The consequences are quite catastrophic: the languages (jargon) used to describe each discipline diverge, making it even more difficult to notice the similarities. This convinces even more people that there isn't any connection -- that the situation is like a tree. At this point, they don't even bother to seek the connections, and we end up with a disconnected mess.
> This vision makes me wonder when mathematics will reach a point where mastering the material required to understand a leaf node will take longer than the average life span.
That can't happen: more time spent studying existing material means less time to push further. It would take many geniuses able to learn and push fast enough. But this is pointless: if it takes a lifetime to learn a particular field, that field will simply fall into oblivion. So the geniuses (at least the ones worth mentioning) won't push further. They will simplify their field, reducing the time required to learn it.
It is depressing, and I have nothing more to say about this.
In programming, the question of whether a C++ programmer can walk in and do Java is still open. People may claim it's not possible or they may give the person a chance. In mathematics, as far as I understand the current situation, there is almost never any debate. Once you are an algebraic geometer or whatever, you aren't going to walk into another subfield and start working except in very exceptional cases. The relative openness of programming, in fact, is why I switched from math to programming.
Unless you're dealing with business executive types (who wouldn't notice if you were lying anyway), most of the time, to most people, a Java programmer and a C programmer and a Ruby programmer are all really just programmers.
They may have different ways of dealing with problems (the C programmer will lie awake trying to remember if he dropped a free(), while the Ruby programmer will stay up nights trying to figure out how to stop copying that list O(n) times), but the point is that they're still dealing with the same problems (it's always memory, isn't it?). The techniques may be different, but the questions and basic concepts never change.
In math, this is far from the case. In geometry, you never have to deal with infinity the same way you do in set theory. In fact, you don't even have to understand the idea of infinity the same way as a set theorist, and because of this, you _can't_ become a set theorist (unless you want to go back to undergrad and disappoint your parents _again_). It's like saying that a Java programmer doesn't need to understand that memory exists. Maybe they don't have to directly allocate and free it, but they still need to know how much they have and what happens when they use too much of it, and they can certainly recognize the same issues in any other language, even if they don't know how to fix them.
Mathematicians just don't have the same amount of common ground, and it's not even that they can't (because there's too much or something), it's that they don't really even want to (because they see it as boring or a waste of time (and in the case of analysis, they'd be right ;-)).
Oh sure, computer science, and especially certain subfields like software engineering and languages, reinvent the wheel all the time. A new fad comes in and its proponents are too egotistical, or too lazy, to be bothered realizing it's the same as an old fad. This happens largely because the literature of computer science is exceptionally broad and not very deep. But this is hardly special to CS: it's common practice in engineering. More to the point, it doesn't have anything to do with what the author was on about.
What the author was talking about is the tendency, in mathematics, for the entire field to become balkanized into small groups with little interdisciplinary crosstalk and a disturbing degree of inbreeding. Many departments consist of specialists who cannot, or will not, talk to one another about their work. Sometimes no one understands what anyone else is doing or talking about. That's the claim anyway, and I have no reason to doubt it from what I've seen myself.
Computer science is nothing like this. Even computer science theory is highly interdisciplinary, application-oriented, and accessible to the mainstream CS audience (or at least to CS academics). In my department, I can talk to every single faculty member, reasonably intelligently, about what they are doing and how it is relevant to others. I can read their papers and more or less understand them. And I'm not some kind of uberprofessor, quite to the contrary. It's just that the field isn't very deep yet.
Edit: Come to think of it, literature may work. Nowadays you don't become an expert in lit, you become an expert in Shakespeare or Proust or Dostoevsky. Sometimes you may even be an expert in a single work. I think the problem is mitigated a little bit by the very low ratio of jargon to information. Math, on the other hand, has an entire new language to learn in each new subfield.
Much worse. I think the problem is magnified by our obsession with languages -- overlapping subsets of syntax features that have highly intricate relationships with programming techniques (making certain techniques easier to implement and others harder, regardless of problem domain).
There is also the language == word on your résumé issue. If something new and cool comes out, people will resist it because they will have to give up their "10 years of Foo experience" for "1 year of Bar experience". Sad but true. (I just say on my résumé, "X years programming experience". Except I don't really know how to pick X, because I have been programming since I was 5 and writing useful programs since high school. Slightly different from going to a Java training class and showing up to work every day for a few years...)
- dynamic object oriented (Smalltalk)
- homoiconic for compile-time metaprogramming (Scheme, Lisp)
- dynamic for runtime metaprogramming, like continuations (Scheme, Ruby, Smalltalk)
- functional (Scheme, Haskell)
- strongly, statically typed (Haskell, Scala)
- logic programming (Prolog)
These are the ones I've come across, but there are more on wikipedia: http://en.wikipedia.org/wiki/Programming_paradigm
Still, I was more wondering about teaching methodology, not enumerating paradigms themselves. What could be done to counteract the "standing on the toes of giants" effect?
If I understand correctly, I'd imagine teaching everyone to implement languages using a system like Ian Piumarta's COLA might do the trick. The point is to break open these black boxes of abstraction (even though black boxes are good sometimes).
While I don't think the problem is quite as severe as Zeilberger claims, I do feel like conference talks should strive to be more accessible to specialists of other fields and to students. Particularly for a conference like the Joint Mathematics Meetings, it would be cool if speakers prepared their talks to be more like TED talks: just technical enough to make sure that the audience can understand the really interesting aspect of the research.
When I followed that advice, my writing was clearer and better balanced. I also found that the narrative in every piece (even technically oriented stuff) became stronger.
I don't write essays any more, but it's still the standard by which I measure every email and comment I write.
"Wrong! - You should be writing for the graduate student who will be picking up your work."
Any graduate student, or even many professors, will be on a similar level when exposed to something new. If you explain stuff at a relatively simple level, with enough intermediate steps to outline the method, most people can grasp how you did what you did.
Maybe not at 17-year-olds, but at people well versed in mathematics, though not necessarily in that field.
I disagree with the characterization that mathematicians don't try to broaden their horizons. Like scholars of any subject, mathematicians follow their interests, which often take them to places they never expected to be. To me, that fits the bill for "broadening horizons". And if they don't seem to move far outside their focus of study, well, I say it has to be this way, because mathematical material is often so dense and precise that it takes a very long time to understand what's going on in any one subject.
See the reference to "the last universalist" in Wikipedia: http://en.wikipedia.org/wiki/Henri_Poincare
The Wikipedia reference comes from a book called Men of Mathematics by Eric Temple Bell, I believe.
Older mathematicians probably already see the big picture -- well enough anyway.
Younger mathematicians need to worry about tenure and can't waste their time on big picture things that are not likely to pay off.
Better still, we need the mathematics community at large to become more accustomed to seeing papers published in journals that do this so that a larger contingent can be made aware that these papers/ideas/theorems/corollaries/lemmata/definitions/people/pictures/etc. exist.
Note also that this doesn't necessarily preclude anyone in mathematics from adopting a specialization. In my eyes, the younger mathematician is in the same place as the startup founder: in the position to take a big risk and do something that sounds like a stupid idea (e.g., avoid publishing results anywhere but freely and openly on the WWW; see also: Daniel Bernstein) in order to have it pay off (e.g., earn tenure).
Uh, mathematics departments decide what the criteria for tenure are. As the article suggests, perhaps mathematics departments could make knowledge of the field in general a criterion.
I think the article is meant to raise this issue and provoke thought, because it certainly doesn't provide a solution.