call me a mathematical hipster (new frameworks etc.) but i could never get into axiomatic set theory. algebra is already boring by virtue of being abstract but then make it even more abstract by dropping all models! at least in algebra you have applications like galois groups, lie algebras, crypto, etc. yes i know that complexity theory is basically in the domain of set theory (complexity hierarchy) but even that was so abstract. so does anyone know any really cool applications of this stuff (just to be clear i don't mean technology i mean concrete theorems).
I am not sure if this is what you are looking for, but the Relational Model is built on a mathematical foundation of Set Theory[1] and first order predicate calculus[2]. This foundation is used to define a logically closed algebra known as Relational Algebra[3]. The implications of this closed logical foundation are, in my opinion, the real power of the Relational Model and are in no small part why RDBMSs exploded in popularity.
In the context of a RDBMS, SQL is a domain language for expressing the underlying Relational Algebra:
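To make the "logically closed" part concrete, here is a minimal sketch of a few relational operators over Python sets of tuples (the relations and columns are invented for illustration). The closure property is just that every operator maps relations to relations, so operators compose freely, exactly like nested SQL:

```python
# relations as sets of tuples; every operator returns another relation
employees = {("alice", "eng"), ("bob", "sales")}
depts = {("eng", 4), ("sales", 2)}

def select(rel, pred):  # sigma: filter rows
    return {t for t in rel if pred(t)}

def project(rel, *cols):  # pi: keep the chosen columns
    return {tuple(t[i] for i in cols) for t in rel}

def join(r, s):  # natural join on r's last column = s's first column
    return {rt + st[1:] for rt in r for st in s if rt[-1] == st[0]}

# composed like a nested SQL query: names of employees in departments of size > 3
big_dept_names = project(join(employees, select(depts, lambda d: d[1] > 3)), 0)
```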
Set theory exists because something was needed to put analysis on a foundation that mathematicians were satisfied with (of course, some didn’t feel a pressing need for it, and some are still unsatisfied with the foundation), especially after Fourier threw a wrench into ideas about the possible definitions of a “function”.
So the “application” for set theory you want is point-set topology, and the “application” for point-set topology is analysis, or further down the road all sorts of modeling used in physics and engineering and so on.
i'm very familiar with all of this (i cut my teeth on baby rudin and i have dipped into papa rudin over the years). what i'm interested in is not the value (i understand the value of set theory like i understand the value of ZFC and Peano) but the "beauty". note i "know" point-set topology (at the level of munkres) and even a little cohomology so i'm looking for something a little more sophisticated than just hausdorff spaces or something like that.
edit: but that is a nice paper you linked to. thank you.
Given your background I'm surprised you'd think you might be called a hipster for not being a fan of axiomatic set theory :). Most math people I know learn just the naive set theory they need to move on to analysis and algebra (and later topology and algebraic geometry).
|\N| = |\Z| = |\Q^2457000| != |\R| isn't wondrous to you? Nor are space-filling curves? Or unmeasurable sets and paradoxes like Banach-Tarski? You don't see beauty in the proofs of Cantor's theorem (|S| < |P(S)| for all S) or the Bernstein-Schroeder theorem (injections X -> Y and Y -> X imply |X| = |Y|)? What about the plethora of mathematical claims that are equivalent to the axiom of choice -- some of them very sought-after properties, some of them unnerving and seemingly broken? Does the status of the continuum hypothesis (and the mathematical legitimacy of, to say nothing of the resolution to, a bunch of oddball "infinitary combinatorics" problems) interest you?
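For what it's worth, the diagonal argument behind Cantor's theorem is short enough to run mechanically on a finite set; the set S and map f below are made up for illustration:

```python
# Cantor's theorem: for any f : S -> P(S), the diagonal set
# D = {x in S : x not in f(x)} differs from f(x) at x itself, so f misses it
def diagonal(S, f):
    return {x for x in S if x not in f(x)}

S = {0, 1, 2}
f = {0: {0, 1}, 1: set(), 2: {0, 2}}  # an attempted surjection onto P(S)
D = diagonal(S, f.__getitem__)        # here D = {1}
assert all(f[x] != D for x in S)      # no x maps to D
```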
What about where set theory interplays with model theory? Would it surprise you to hear that the dizzying towers of infinities that can be spoken of in ZF can be modeled by a merely countable universe? What about a computer program (albeit an impractical one) that decides whether sentences about the natural numbers with successor are true or false, whose existence is discovered by proving that large uncountable models are isomorphic?
To be clear, it's okay to be bored by this stuff. I tend to get bored pretty quickly when people start talking about systems of DEs or eigenvectors, or really much of anything that looks too much like calculus, and I'm not about to apologize for my apathy for those things. But most of the above stuff tends to catch the attention of even the humanities types, who are easily deterred by most mathematical topics.
Out of curiosity, I might ask whether you're similarly bored by computability theory or complexity theory, which leverage many of the same tools as those used in set theory (mappings as preorders, diagonal arguments, etc), but add computable enumerability/uniformity/syntacticity requirements to the mix.
yes, Bernstein-Schroeder wasn't interesting except in its "universality". Cantor diagonalization, yes, was very formative for me in my mathematical maturation, and Godel too, but no, the complexity towers - recursively enumerable and nondenumerable - don't interest me (or whatever the class that NP belongs to is called - I've forgotten most of the recursion theory I learned by now). so yes, computability theory was very boring to me. in the parlance I'm a geometer rather than an algebraist (so analysis, manifolds, relativity, etc.)
Personally (as someone who did several pure math courses as an undergraduate but is not by any means a pure mathematician) I think Cantor’s set theory and everything that flows from it is a bunch of hokum.
But if mathematicians’ convention is that they will refer to ZFC when writing their proofs about something else I am interested in, I am willing to humor them, and keep my caveats to myself.
I’m reasonably well convinced that most models mathematicians can make in the context of ZFC which have any real-world implications (e.g. accurately simulate some physical phenomenon) could be alternately proved under some more restrictive set of axioms, just sometimes with a lot more hassle.
In my opinion it is entirely speculative, built on largely unconvincing foundations (involving a lot of hand-waving and circular reasoning when you start poking at them), whose implications are often absurd.
But it doesn’t really matter what I think. This is not a fight I care about. It’s like arguing with Buddhists about the nature of suffering or something.
Can a statement like Goodstein's theorem (which is about finite objects) be proven by finitist mathematics? It cannot be proven by Peano arithmetic, which is bi-interpretable with ZFC with the axiom of infinity replaced by its negation [1], or any weaker theory.
Could you expand on this a little? Which reasoning do you find to be circular, and which implications do you find to be absurd? If you explain what you take issue with, others may be able to help you understand it more clearly.
Formal systems (like formal axiomatic set theory) are more in the domain of logic than mathematics (although there are things like model theory that lie at the intersection of the two). From their very early conception, they were designed to serve two roles: 1. Serve as completely precise languages, where no ambiguity is possible, and 2. allow for mechanical reasoning. As role 1 is often in service of role 2, I think it's fair to generalize and say that the role of formal systems is mechanical reasoning.
If mechanical reasoning is not something that interests you, you have little use for these pedantic systems. Mechanical reasoning has two current uses: 1. mechanical verification of mathematical proofs and assistance in such proofs -- this application is very rarely used in practice, and is commonly the domain of a few enthusiasts who hold high hopes for improvements that would make it more mainstream. 2. Mechanical verification of software, including static analysis, model checkers, type systems and other software-relevant proof systems.
Axiomatic set theory isn't just for "pedantry" and mechanical reasoning, though that's an important application. It's also like a telescope array that allows us to see ever further into Cantor's paradise and the universe (multiverse?) of mathematics. The upper levels of transfinite set theory are especially breathtaking and somewhat scary [1]. Hugh Woodin spoke a little bit about this in a presentation on the Ultimate L conjecture:
"There's the other possibility: set theory is completely irrelevant to physics... That's almost more interesting because if this conjecture is true, we've argued there's truth here for set theory, and it's a truth that's not about the physical universe. To me that's almost more remarkable, that there could be some conception of truth that transcends truth that we see around us."
The presentation is called The Continuum Hypothesis and the search for Mathematical Infinity [2].
Formal systems aren't just about mechanizing mathematical work to make it a subject of software engineering. (Though this is a useful and probably underdeveloped outcome.) They also turn mathematical work itself into mathematical objects which one can reason about, which opens the door to metamathematics, i.e. the ability to reason mathematically about aspects of mathematics itself.
The stuff in that document is all very fundamental and basic, and doesn't seem that abstract at all to me. If you want to do mechanical theorem proving, you should be(come) familiar with that stuff. Personally I like set theory as a basis, because it is so simple and intuitive.
Obviously, professional set theorists go way beyond what is described in this document, but to be honest, none of that more advanced stuff seems very useful to me in practical applications in technology.
> The stuff in that document is all very fundamental and basic, and doesn't seem that abstract at all to me.
I wondered the same thing. The first half of the linked paper is usually contained (if abridged) in the intro chapter of more or less any algebra/topology/analysis textbook. I don't remember Rudin's first book being an exception. Most of that material can be found in the first chapter of Rudin's "Principles" (3rd edition).
I'd say the basic topology in R^n is much more abstract (the 2nd chapter of Rudin "Principles" [3rd ed]) and it's not treated in the linked doc.
Set theory is basically the same thing as type theory: these are two schools of formal languages designed to define logics (something which is even more true for this book, which seems introductory). I don't really like set theory (imho it's a pain to use rigorously and formally) but type theory is cool. If you've already used typed languages in the ML/Haskell family you will have a lot of intuition for constructive logic (formal reasoning in type theory is literally the same thing as programming). Shallow applications of type theory are numerous in recent compiler developments (using types, rust can manage memory correctly and statically; in general, types are a good channel between humans and compilers for high-level, structured information about what is being written, leading to tons of systematic optimizations).
So: all kinds of typed programming languages with non-trivial features, be they effects (monads & modalities, linear logic), structuring/interfaces (parametrized modules, type classes, translucent types, ...) or interactive theorem proving.
Tbh: I didn't read the document, only the table of contents.
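The "formal reasoning is literally the same thing as programming" point (Curry-Howard) can be seen even in Python's type hints, reading `Tuple[A, B]` as conjunction and a function type as implication; this toy example is my own, not from the document:

```python
from typing import Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def and_comm(p: Tuple[A, B]) -> Tuple[B, A]:
    # a total program of this type is a proof of A /\ B -> B /\ A
    return (p[1], p[0])
```

Under Curry-Howard the function body is the proof term; in a real proof assistant the type checker would also rule out nontermination, which Python of course does not.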
I think one of the problems of set-theory and applications is that it doesn't lend itself very well to computers. An immediate thought was software verification & theorem proving, but while it's used in modelling (for example Peano numbers), the underlying theories should (all?) be type theories.
But that's something I am definitely not an expert in. Maybe someone with real knowledge can help answer this!
Of course everybody interested in math & its applications should know the really basic set stuff. But I think that, while foundational, set theory is just not that important to be super proficient in! You could write all the required stuff on a single sheet of paper.
> I think one of the problems of set-theory and applications is that it doesn't lend itself very well to computers.
I might be misunderstanding what you mean here, but I'm confused by this statement. Set theory is used very often in computer science. Relational databases are essentially applied set theory.
The main reason why type theory is used for mechanized theorem proving is that developments in that area are mainly driven by computer scientists, not mathematicians. And computer scientists like their formal objects to look like programs.
Type theory (or more generally, constructive mathematics) has some really nice properties, for instance guaranteeing computability, and allowing the extraction of programs from proofs. A lot of recent developments in the area were in fact driven by the late and great Voevodsky, a Fields medalist who certainly qualifies as a "mathematician". In my experience a lot of mathematicians who care about foundations, such as logicians, also like constructive maths. It's just analysts who don't, because they care about how well their foundations support doing analysis, not about how nice they are in some abstract sense, and constructive analysis is harder (because of the lack of the law of the excluded middle, which makes it harder to make the real numbers "complete", as you can't just pull out an arbitrary number between a and b without some way to construct it).
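To illustrate that last parenthetical, here is a rough sketch (my own, in Python) of why comparing constructive reals is only semidecidable: represent a real as a function n ↦ rational approximation within 1/2^n; then apartness can be confirmed by finite search, but equality can never be refuted in finite time:

```python
from fractions import Fraction

def approx_sqrt2(n):
    # rational approximation of sqrt(2) within 1/2**n, by bisection
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2**n):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid < 2 else (lo, mid)
    return lo

def apart(x, y, max_n=64):
    # semidecide x != y: approximations more than 2/2**n apart cannot
    # come from the same real; equality is never positively confirmed
    for n in range(max_n):
        if abs(x(n) - y(n)) > Fraction(2, 2**n):
            return True
    return None  # undecided within the search budget

close_rational = lambda n: Fraction(14142, 10000)  # 1.4142: close to, but not, sqrt(2)
```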
As someone whose education straddled the four-pillars pure topics, mathematical foundations, and computer science, I share a bit of a wariness about AC with the constructivists, but I cannot let go of LEM. It's a big part of how I reason, and a number of results that I consider important and illuminating hinge upon it.
My (not entirely informed) sense is that constructive mathematics would have more success if more of an effort was made to pose it as a readily instantiable, tagged embedding within the larger classical mathematical corpus. Less Brouwer-style dogma about LEM being "wrong", more emphasis on streamlining a pathway for results to travel from the mathematical journal literature into software design specs.
Note that axioms are usually added; removing them is weird. As such I see it mostly the other way around: there is a classical fragment inside constructive logic (this should not contradict what you said tho). In constructive logic, for a proposition `P \in Set` the proposition "P is in the classical fragment" is `dec(P) := P \/ (P -> 0)` (`0` being the empty type). As such we can define the set of classical propositions: `CL := exists P \in Set, dec(P)`, i.e. pairs of a set together with either an element of it or a function that maps its elements into the empty type (absurdity).
LEM is not wrong, it's just not always true. You must be familiar with the fact that AC is not provable in ZF; isn't LEM, likewise, kinda philosophically suspect? Not forcing LEM to be true is useful to get a better (more generic) understanding of complicated things. In general, restricting logics (eg linear logic, constructive logic, etc) only makes them more powerful: we are able to express more propositions, and we can internally (or externally, as in linear logic with `!`) define the fragment for which the old axioms were valid. Pure languages enable explicit talk about computation; linear languages enable explicit talk about memory. Restrictions are about increasing expressivity, and most of the time we build them in a way that retains the same proof-theoretic power.
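The classical-fragment idea above can be written down directly; a tiny Lean sketch (definitions mine, following the parent's `dec(P)`):

```lean
-- "P is in the classical fragment": dec(P) := P ∨ (P → 0),
-- with the empty type 0 rendered as False
def Dec (P : Prop) : Prop := P ∨ (P → False)

-- for such P, excluded middle holds constructively
example (P : Prop) (h : Dec P) : P ∨ ¬P := h
```

The last line typechecks because `¬P` is definitionally `P → False`, so `Dec P` and `P ∨ ¬P` are the same proposition.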
Didn't you mean "implicit" here instead of "explicit"? Because I have no trouble at all to talk explicitly about memory and computation in full-blown set theory.
>It's a big part of how I reason, and a number of results that I consider important and illuminating hinge upon it.
How many of these results require applying the LEM to undecidable propositions? The LEM works fine in constructive logic for provably-decidable propositions.
It's just that I usually don't care about the decidability of the propositions I apply LEM to, so I don't want to first have to prove something I don't care about.
This. I would also add that a substantial part of the type theory crowd are category theorists (or have developed an acquaintance, as category theorists are taking over type theory), who also qualify as mathematicians (even tho they probably also qualify as haskell programmers). Discrete maths (graph stuff, algorithmics, algebra) and logic (languages, computability, crypto) pretty much are the definition of what constitutes theoretical computer science.
> even tho they probably also qualify as haskell programmers
For what it's worth, Haskell has very little to do with actual category theory. Even though terminology like "monad" is used, Haskell doesn't have much more overlap with higher mathematics than any other programming language. Mathematicians working on category theory and software engineers working on Haskell don't have much overlap at all.
This is kind of frustrating because it's a meme that Haskell is very mathematical. It borrows concepts from category theory, yes. But it's not any more mathematical than calling a GET request "idempotent" is mathematical.
I didn't say haskell is mathematical (and I do agree with your point); I said category theorists probably know haskell of all languages because of the deeper understanding/intuition they will naturally have for it. There is no denying that haskell is laid on a category-theory foundation and that understanding it will help you. But of course well-designed things do not require you to understand their foundations to build intuition/understanding.
No, I understand what you're saying. I am saying category theorists will not have any special affinity for Haskell or vice versa. I also don't agree that Haskell is built on top of category theory. That's why I mentioned the example of idempotence.
The idea that Haskell has much to do with mathematical category theory is a meme that arises from the cargo cult of functional abstraction. It's an example of terminology overloading. There is this pattern of taking terminology from abstract algebra and category theory and using it to describe design patterns in functional programming or certain paradigms of engineering. "Monad" is one you see in Haskell which ostensibly makes it related to category theory. Another example is the idea of an "isomorphism" in web development, which doesn't mean the same thing as it does in algebra.
There's no problem with using terminology borrowed from mathematics for design patterns in engineering (and vice versa), but one should be transparent about the fact that it doesn't confer any special relation to the underlying mathematics. A category theorist and a Haskell programmer have different conceptions of what a "monad" is, and any functional design pattern modeled after an idea in category theory is going to be covered in the introduction or first chapter of a textbook. The language has more to do with lambda calculus than it does with anything in actual category theory.
> call me a mathematical hipster (new frameworks etc.) but i could never get into axiomatic set theory. algebra is already boring by virtue of being abstract but then make it even more abstract by dropping all models!
can you elaborate here? what is a mathematical hipster, and what are “new frameworks”? and what do you mean by algebra “dropping all models”? i am confused by what these mean.
algebras are about studying various means of combination. given some set, how can we combine two things from that set to get another thing in the set? what are the structures that arise as we explore that question? i highly recommend reading the expository chapter one of pinter’s abstract algebra book.
Looking at the table of contents, it looks like the first half is really foundations of abstract algebra; you really can't rigorously talk about it, or analysis, or anything else, without this stuff.
The first half of (section one) doesn't comprise the foundations of abstract algebra in particular so much as the foundations of modern mathematics in general. The foundation of modern abstract algebra is more about axiomatic properties of structures like groups, rings, modules, fields, vector spaces, etc.
For that matter you actually don't need a lot of this machinery to derive analysis; for that all you need is a minimal subset (hah!) of set theory. Analysis focuses on the properties of continuous things, like real and complex sequences. You can build the real and complex numbers constructively (say, via Dedekind cuts of rationals) or introduce them axiomatically (as a complete ordered field), as long as you have the definitions of sets, subsets, bounds, unions and intersections. I'm not even sure you need to care about the distinction between a subset and a proper subset.
That's not to say the material presented here isn't useful - it is. I'm just making the point that most analysis doesn't require abstract algebra aside from fields and (later on) vector spaces, and you don't need a whole lot to get there.
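As a sketch of how little machinery is needed: a Dedekind cut is just a predicate on rationals, and Python's exact `Fraction` type is enough to represent one (the example is mine):

```python
from fractions import Fraction

def below_sqrt2(q: Fraction) -> bool:
    # membership predicate for the lower Dedekind cut of sqrt(2):
    # downward closed, bounded above, with no greatest element
    return q <= 0 or q * q < 2
```

Ordering of cuts is set inclusion of the predicates, and a rational q embeds as the cut {p : p < q}; completeness (every bounded family of cuts has a supremum, namely the union) is the property analysis actually needs.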
My favorite critic of ZFC is Norman Wildberger, who has a great YouTube channel. I found his formulations of complex multiplication, the pythagorean theorem and quadratic forms were particularly worthwhile, even though he comes off as barely computer literate in other videos.
But no, seriously, it depends on what kind of set theory you're talking about. The stuff Hilbert was arguing about up to the stuff still being argued about today in the ivory tower? No, you pragmatically do not need to know that unless you're pursuing a research career in pure math. The stuff in an undergrad probability course? Yeah, sets come up basically everywhere if you're looking and are willing to think like that.
Even if you're researching pure mathematics you probably don't need rigorous, axiomatic set theory. Number theorists, algebraic geometers, probability/measure/ergodic theorists, etc all can conduct their work with just naive set theory.
In a world where it's taken for granted that, say, countable additivity over measurable sets will work out fine, and is felt to be a different beast than arbitrary use of a separation schema plus a set of all sets, sure. But it took early 20th-century mathematical logicians decades of exploration to get to a point where there was confidence that people doing the former thing could get by mostly "naively", without being in danger of doing the latter thing. (See e.g. Galileo's paradox.)
Modern pure mathematicians benefit from being able to conduct most of their reasoning on the lower footsteps of the von Neumann hierarchy (V_{omega + k} for small k), where they get access to higher-order tools like function spaces and topologies, countability as a fruitful property to impose and work with, and a playground where most of the disparate branches of mathematical research can be brought together without worrying about bindings between underlying frameworks. None of these things require a deep knowledge of the metamath of ZF to work with, but they'd all make many 19th-century mathematicians nervous.
Note also that the most frequently used tools of naive set theory today (e.g. unions/intersections, separations/replacements, powersets, cartesian products/relations/functions, a few notable infinite sets) roughly mirror ZF. That shouldn't seem like an accident. Having working mathematicians trained to think and organize their work in these constructs keeps them around those lower footsteps.
I learned this years ago and forgot everything. I'm curious - does anyone actually use linear algebra in their day to day, and if so, what's your job title?
Aerial imaging. We use it for both photogrammetry and some aspects of image color correction based on the image's histogram. I imagine it's used extremely often in robotics as well.
It is a source of endless wonder and bemusement to me that we did not choose geometry as our basis for logic and proof over the equality symbol. Many brilliant minds are barred from entry into the field of mathematics because of this error.