Hacker News
There's more to mathematics than rigour and proofs (2007) (terrytao.wordpress.com)
195 points by haraball 8 days ago | 98 comments





> The distinction between the three types of errors can lead to the phenomenon ... of a mathematical argument by a post-rigorous mathematician which locally contains a number of typos and other formal errors, but is globally quite sound, with the local errors propagating for a while before being cancelled out by other local errors

I was initially amazed at this when I was in graduate school, but with enough experience I started to do it myself. Handwaving can be a signal that someone doesn't know what they are doing, or that they really know what they are doing, and until you are far enough along it is hard to tell the difference.


> Handwaving can be a signal that someone doesn't know what they are doing, or that they really know what they are doing, and until you are far enough along it is hard to tell the difference.

I've found it's very easy to distinguish these two when you have another expert ask questions. But if you don't have someone like that in the audience, it might take forever. Or at least until you become an expert yourself.


“Learn the rules like a pro, so you can break them like an artist.” - Pablo Picasso [0]

[0]: https://www.goodreads.com/quotes/558213-learn-the-rules-like...


Good point. Here are some notes on it, based on what I've observed in academia and in other environments:

I think handwaving comes in different flavors:

- Handwaving and not knowing what they are doing, when they know they don't know:

This is arrogance and/or fear of people thinking you are a fool. Bad practice. Professionals who do this are status chasers and not fun to be around. Students who do this are mostly insecure, and they might just need some help with their self-esteem. Help them by letting them feel comfortable with being wrong. Foster a good environment so that the arrogance and fear fade away.

- Handwaving and not knowing what they are doing, when they don't know they don't know:

I believe this is a good thing, in particular for Students, if they are within a nurturing environment. It can lead to interesting ideas and to discussions of innovative ways to move forward. I believe this to be a way of actually "training your intuition muscle," both for Students and Professionals. It lets them know not to fear moving on, tackling the thing that captures their attention the most at first, and later on filling in some of the gaps, which I feel is common practice for people who have been working in the field for a while. However, if the gaps are left unattended it can lead to bad things... Environment matters.

- Handwaving and knowing what they are doing, when they know they don't know:

For trained Professionals only... :) This modality kinda kicks in when deep in mathematical work. It's the path that leads to the Eureka moments... Pure trained intuition acting almost as a separate entity to oneself. We are facing the unknown and something tells us that a certain aspect can be handwaved; we don't fully know why, but we feel it can be. Later on it becomes clear why we could do the handwave. It works itself out.

- Handwaving and knowing what they are doing, when they don't know they don't know:

For trained Professionals only... Kind of a stretch, but this might be where our intuition either fails us completely, or completely takes us by the hand to turn the unknown unknowns into known unknowns, at which point it goes back to the previous category.

This isn't set in stone by the way, just some thoughts I had while reading the article...

Any ideas or suggestions for modifications are more than welcome.


Did anyone whisper in your ear, “Welcome to the dark world!”?

“Before I learned the art, a punch was just a punch, and a kick, just a kick. After I learned the art, a punch was no longer a punch, a kick, no longer a kick. Now that I understand the art, a punch is just a punch and a kick is just a kick.” - Bruce Lee

For the interested, the original Dōgen zen koan goes something like this — Before I began to practice, mountains were mountains and rivers were rivers. After I began to practice, mountains were no longer mountains and rivers were no longer rivers. Now, I have practiced for some time, and mountains are again mountains, and rivers are again rivers.

And Dōgen is quoting the Zen (Chan) Buddhist Qingyuan Weixin [Seigen Ishin].

I came here looking for this Zen Koan

I believe this is also tied to La Subida del Monte Carmelo (The Ascent of Mount Carmel) by San Juan de la Cruz. I'm oversimplifying, but basically it goes like this:

As San Juan climbs Monte Carmelo, he finds nothing at the base of the mount, then he finds nothing at the middle, but then, at the summit, he finds Nothing (capital-N Nothing).

See the following image for reference: https://commons.wikimedia.org/wiki/File:Monte_Carmelo_Juan_d...

There are quite a few parallels between proper "Catholic Mystics" and Zen teachers...

I highly recommend San Juan de la Cruz's works, in particular Ascent of Mount Carmel and The Dark Night, along with The Cloud of Unknowing, which was an inspiration for him.

For those curious about learning more about Koans, I cannot recommend this other book highly enough: https://www.amazon.com/Two-Zen-Classics-Gateless-Records/dp/...

The book contains a collection of koans along with Mumon's (et al.) Commentary and Mumon's Verse, as well as modern-day notes that help us understand some of the concepts hidden behind what looks like "poetical nonsense" at first, and that give context for historical and mythological figures mostly unknown to those "outside the loop." What I love about the notes is that they still leave you the opportunity to explore the koan further, properly, so they don't really take away all the fun.

Example Koan from the book:

##########################################################################

Case 4 The Western Barbarian with No Beard

Wakuan said, "Why has the Western Barbarian no beard?"

Mumon's Comment:

Study should be real study, enlightenment should be real enlightenment. You should meet this barbarian directly to be really intimate with him. But saying you are really intimate with him already divides you into two.

Mumon's Verse:

Don't discuss your dream before a fool. Barbarian with no beard Obscures clarity.

NOTES (abridged)

- The Western Barbarian: The Western Barbarian stands for Bodhidharma, who brought Zen to China from India. He is always depicted with a beard. The case therefore means, "Why doesn't Bodhidharma, who has a beard, have no beard?"

- Meet this barbarian directly: This is not really a meeting but a becoming. You should yourself become Bodhidharma. Then if you have a beard, Bodhidharma has a beard; if you have no beard, then neither had Bodhidharma. But can you say that you are in truth Bodhidharma?

[...]

- Obscures clarity: Words, concepts, and other inventions of the mind only obscure the truth. Do not cling to shadows, but catch hold of truth itself.

##########################################################################

You get the idea.

Enjoy!


People have pointed out similarities to a few things in this thread now. It's pretty much the bell curve meme: https://knowyourmeme.com/memes/iq-bell-curve-midwit

man, sometimes things are obvious and right in front of us.

I've known the Dogen saying for years. Have even meditated on it.

I've been enjoying the Bell Curve meme for some time. Very funny.

Never put together that these were the same thing.

Today I am awakened.


At first, the Dogen saying and the bell curve meme seem like different things, then..

first there is the bell curve,

then there is no bell curve,

then there is the bell curve...

If you learn anything significantly deeply, this is a repeating pattern.


The bell curve meme is a static taxonomy of people, and if the link you posted is correct, its origins were explicitly political. The version described by Bruce Lee, and by anybody who has achieved a high level of skill, is about the process of learning and mastery.

In the bell curve version, the wisdom of grug brain and the wisdom of the monk are presented as equal, and the struggle of the midwit is framed as a pretentious, unnecessary aberration. The lesson it teaches is, don't seek out knowledge and new ideas. Education confuses the "midwit" people who are smart enough to partially grasp it but not smart enough to see through it like the monk.

In contrast, the "a punch is just a punch" version frames the conceptual struggle as a necessary phase in a process that leads to mastery. The beginner cannot engage directly with the simplicity of the master, so the beginner must engage through concepts and through practice. The more they do so, the more simple things begin to feel.

Since this started with a Bruce Lee quote, we can use him to see that it is not just a linear process that passes through conceptual education and ends in mastery. That leads to a dead end, because mastery can only be complete in a limited context. Bruce Lee kept searching outside his zone of mastery to find ways to get better. He studied techniques from other martial arts and fighting sports, even though in doing so he had to engage at a conceptual level since he had not mastered those arts.

For example, his art included trips and throws, and he was a master of his art. Yet when he got the chance later, he practiced judo with expert judoka. To do so, he had to back off from "a trip is just a trip, a throw is just a throw" and learn the techniques of judo. I don't think anybody has ever suggested he was a master at judo, which suggests he had to be engaging with it on a conceptual level. Yet he believed that his practice with judo improved the skills he had already achieved mastery at.

Not only did Bruce Lee preach constant assimilation of new ideas, he also preached simplification by discarding what is not useful. If a punch is just a punch, what do you discard? The whole punch? In order to find something to discard, you must look past the apparent simplicity of the internalized skill and dissect it conceptually.

In this view of things, conceptual thinking is not just a phase you go through on the way to mastery, but rather a complementary way of engaging with a skill. It is a tool for refining and elevating your intuitive mastery. Simplification and desimplification are the tick-tock of learning. A "mastered" skill is not like a video game sword that, once forged, always has the exact same stats, but is more like a Formula 1 car that is continually disassembled, analyzed, and rebuilt.

The bell curve meme does occasionally get used to express a linear ignorance-struggle-mastery story of learning, but its origin and most common use is to caricature the pursuit of knowledge as pretentious foolishness.


Is this a subtle joke on the discussion?

Is this a joke mimicking someone at the midpoint of the 'bell curve meme' explaining the 'bell curve meme'?


If you want a shorter version, "grug brain, monk brain" pretends to be profound, like "Zen mind, beginner's mind," but it's just a lazy take. If "grug brain, monk brain" were poetry, it would say:

  We shall not cease from exploration
  And the end of all our exploring
  Will be to arrive where we started
  And it'll look exactly the same because travel is pointless.

This is the best meme ever / So accurate in many ways

The bell curve meme classifies, and it classifies the classifier.

There's a quote I've heard attributed to Einstein: "Now that the mathematicians have invaded relativity, I myself don't understand it".

You can see people go through that process right here on HN, slowly realizing that 1 the integer is the same as 1 the real number.

That thread felt like I was taking crazy pills. So many people confused by the difference between the construction of numbers using some particular set of foundational axioms and the properties of numbers that should hold true _regardless of the constructions_. Obviously the "integer 1" is not, strictly speaking, "the same as" the "rational number 1" when constructed in set theory, but there's a natural embedding of the integers into the rationals that preserves all the essential properties of the integer 1 when it's represented as the rational number 1. Confusing the concept with the encoding, basically.
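Concretely (a standard sketch, not something spelled out in the thread): the embedding is

  \iota : \mathbb{Z} \to \mathbb{Q}, \quad \iota(n) = n/1, \quad \iota(a+b) = \iota(a) + \iota(b), \quad \iota(ab) = \iota(a)\,\iota(b), \quad \iota(1) = 1,

so any arithmetic statement about the integer 1 transfers verbatim to the rational 1, even though the underlying set-theoretic encodings differ.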

I hope they aren’t all realising zero isn’t zero somewhere as well. I could really do without that headache today.

> One can roughly divide mathematical education into three stages:

Similarly with programming.

1. Write programs that you think are cool

2. Learn about data structures and algorithms and complexity and software organization.

3. Write programs that you think are cool. But since you know more, you can write more cool programs.

If things are working as they should, the end stage of mathematics and programming should be fun, not tedious. The tedious stuff is just a step along the way for you to be able to do more fun stuff.


It's kind of like how people who are really, _really_ good at something approach it with a certain simplicity and straightforwardness. Superficially, it looks like how a novice would approach things. But look under the covers and they are doing similar things, with a much deeper understanding of why they are doing them that way.

Example: (1) You start programming with the simplest abstractions and in a concrete way. (2) You learn about all the theory and mathy stuff: data structures, algorithms, advanced types, graphs, architecture, etc. Eventually you become very skilled with these, but at a certain point you start to bump up against their limitations. Technical disillusionment and burnout may set in if you are not careful. (3) You return to using abstractions and architecture that are as simple as possible (but no simpler), but with a much deeper understanding of what is going on. You can still do very complex stuff, but everything is just part of a toolbox. Also, you find yourself able to create very original work that is elegant in its seeming simplicity.

I've noticed the same thing in other fields: the best approach their work with a certain novice-like (but effective) simplicity that belies what it took for them to get to that point.


Or alternatively:

1. Programming in very concrete/practical terms because you do not know how to think in precise and abstract terms (do not know math)

2. Thinking more precisely and abstractly (more mathematical way)

3. Only doing some key, important abstractions, and being a bit hand-wavy again in terms of precision. The reason: important real-world problems are usually very complex, complex problems resist most abstractions, and being totally precise in all cases is impossible due to the complexity.

All in all, it is due to increased complexity, in my opinion.

Example: 1. Writing some fun geometry-related programs 2. Learning about geometry more seriously 3. Writing software based on a multiple-hundred-thousand-line CAD kernel.

Other example: 1. Writing fun games on the C64 2. Learning about computer graphics at university 3. Contributing to the source code of Unreal Engine, with multiple millions of lines of code and multi-thousand-line class declaration header files.


This is true but I think it's iterative, cyclic. It applies to any art and craft, really. You alternate between perceiving and projecting, receiving and creating.

Any skill really. You alternate between theory and practice.

For example in sports you play for fun, then do some coaching to get better, then play for fun using your new skills and so on.


Yes -- I also wonder if a description involving learning multiple programming languages might fit:

1. Hack programmatic-functionality in a first language

2. Master the intricacies of a first language, understanding all programmatic concepts through the lens of that language's specific implementation details. Pedantically argue with those familiar with different language implementations, due to a kind of implementation-plurality/essential-form blindness

3. Learn additional languages, and 'see past' specific implementation details and pitfalls of each; develop a less biased understanding of the essence of any task at hand


> 1. Write programs that you think are cool

> 2. Learn about data structures and algorithms and complexity and software organization.

> 3. Write programs that you think are cool. But since you know more, you can write more cool programs.

Hegel :-)


Also (in C++ lingo):

1. Start by writing programs with vectors and maps.

2. Learn all about data structures, algorithms, cache misses, memory efficiency etc

3. And then write programs with vectors and maps.


> 3. And then write programs with vectors and maps.

But the maps this time are absl::flat_hash_map (or another C++ alternative hash map such as Folly F14, etc) instead of std::map (or even std::unordered_map).


Also in Haskell:

1. Start by doing everything in ReaderT Env IO

2. Learn all about mtl (or monad transformers, free monads, freer monads, algebraic effects, whatever)

3. Do everything in ReaderT Env IO


> 3. Write programs that you think are cool. But since you know more, you can write more cool programs

The integration phase goes much deeper. The first stage is about learning how to write programs. The second is about writing programs well. The third is to intuitively reason about how to solve problems well using well-written programs; you can still code, but it's no longer where the heavy lifting is.


I love how well-spoken Tao is. I've enjoyed lots of his lectures before; even if you're not an expert in whatever he's discussing he knows how to explain it just right to get you up to speed as best as he can. His communication and math skills are phenomenal.

Yes! He’s a great counterexample to the popular view that mathematical/pure logical reasoning ability is negatively correlated (even zero-sum) with communication ability. Yes, there are people that are crap at one and quite good at the other… but you can’t make much of an inference when given one without the other.

Is this a popular view? I think mathematicians can be odd, but usually they communicate quite well. I think as far as popularization of their fields goes, mathematics is probably doing the best out of the lot: Numberphile, 3Blue1Brown, etc.

The examples you list are not known as mathematicians; they’re popularisers who (sometimes) happen to have qualifications and a history of studying the subject. 3B1B is absolutely brilliant but Grant Sanderson is not a ‘mathematician’ in the sense of someone who does research in mathematics.

Ironically, the fact that mathematics popularisation is as visible as it is is itself a sign of how much it is needed and therefore how unpopular and misunderstood the subject is. Branches of science like, say, astrophysics don’t need popularisation; people already think they’re cool.

The view of ‘people who are good at mathematics’ being bad at English is a relatively common one, in my experience. At least at the level of university students. People think there’s some sort of conservation of ability or equilibrium in the universe that means that if you have a ‘maths brain’ then you’re no good at much else, and vice versa. If anything, I think there’s a positive correlation between mathematical and communication ability — after all, mathematics is basically just the science of clever notation and clear-headed thinking.


The first time I really felt I understood math in depth was my uni linear algebra course. Distance and orthogonality were replaced with a more abstract, but better, inner product. It behaved like an IT interface: as long as some basic properties were fulfilled, all of linear algebra came along. Half of the examples were the usual numeric vectors and matrices; the others were integrals, etc...
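For reference, the "interface" amounts to just a few axioms (real case; a standard statement, not quoted from any particular course):

  \langle x, y \rangle = \langle y, x \rangle, \quad \langle \alpha x + \beta y, z \rangle = \alpha \langle x, z \rangle + \beta \langle y, z \rangle, \quad \langle x, x \rangle \ge 0 \text{ with equality iff } x = 0.

Anything satisfying these - dot products of numeric vectors, or \langle f, g \rangle = \int f g on functions - inherits lengths, angles, orthogonality, Cauchy-Schwarz, and the rest of the linear algebra machinery.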

I didn’t get much out of linear algebra. It felt too computational. I only really got it once I “relearned” it as part of abstract algebra

There is a big range of approaches out there for teaching linear algebra. I really enjoyed my quite abstract linear algebra course, but there were a lot of other kids in it that really struggled and did not get much out of it. Takes all kinds.

“The point of rigour is not to destroy all intuition; instead, it should be used to destroy bad intuition while clarifying and elevating good intuition. It is only with a combination of both rigorous formalism and good intuition that one can tackle complex mathematical problems; one needs the former to correctly deal with the fine details, and the latter to correctly deal with the big picture. Without one or the other, you will spend a lot of time blundering around in the dark.”

Well put! In empirical research, there is an analogy where intuition and systematic data collection from experiment are both important. Without good intuition, you won’t recognize when your experimental results are likely wrong or failing to pick up on a real effect (eg from bad design, insufficient statistical power, wrong context, wrong target outcome, dumb mistakes). And without experimental confirmation, your intuition is just untested hunches, and lacks the refinement and finessing that comes from contact with the detailed structure of the real world.

As Terry says, the feeling of stumbling around in the dark suggests you are missing one of the two.


The worst thing is when someone thinks that "math is 100% infallible and all about rigor; you gotta show your work and include all the steps," yet they also think that set theory is good enough and doesn't have problems.

They say things like "Everything in math is a set," but then when you ask them, "OK, what's a theorem and what's a proof?", they'll either be confused by the question or say something like "It's a different object that exists in some unexplainable sidecar of set theory."

They don't know anything about type theory, implications of the law of excluded middle, univalent foundations, any of that stuff


My favorite was when a manager tried to get me to agree with the statement that "math was just for the numbers right?". Meaning not character strings nor dates. I was dumbstruck by the question.

Math is for... numbers? That's engineer talk right there.

It's 100% possible to base logic and proof theory off of set theory. For example, you can treat proofs as natural numbers via Gödel encoding (or any other reasonable encoding) and we know that natural numbers can be represented by sets in multiple different ways.

You may prefer type theory or other foundations, but set theory is definitely rigorous enough and about as "infallible" (or not) as other approaches.
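For a concrete flavour of that encoding (one standard prime-power scheme, stated here only as a sketch): a finite sequence of symbol codes s_1, ..., s_k becomes the single natural number

  \ulcorner s_1 s_2 \cdots s_k \urcorner = 2^{s_1} \cdot 3^{s_2} \cdots p_k^{\,s_k},

where p_k is the k-th prime. Unique factorisation lets you decode it again, so formulas and whole proofs become ordinary numbers, which set theory already represents.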


Maths, when done correctly, _is_ 100% infallible by its own design. It's just that reality isn't obliged to play by your rules =P

Modern set theory is sufficient for most mathematicians. That other stuff is interesting, but you can do great mathematics without it.

Yeah, they should have heard about ZFC and have a notion of what a formal proof is. On the other hand, I'm not sure your last sentence is really that relevant.

> They don't know anything about type theory, implications of the law of excluded middle, univalent foundations, any of that stuff

I'm doing a PhD in algebraic geometry, and that stuff isn't relevant at all. To me "everything is a set" pretty much applies. Hell, even the stacks-project[1] contains that phrase!

[1] https://stacks.math.columbia.edu/tag/0009


> I'm doing a PhD in algebraic geometry, and that stuff isn't relevant at all.

Yes, exactly. These are topics that 99% of legit mathematicians don't know or care about.

It's like saying "I'm a computer expert" when you only know Python, and then a computer engineer that designs CPUs starts laughing at you


I don't understand the point of your comments.

My claim is that mathematics as practiced by mathematicians is not as rigorous as they think it is, and in fact they're not even aware of the advances in rigorous mathematics that they're not using. Even though those advances are super important.

What important discoveries have you made using your supposedly "more rigorous" mathematics?

Loaded question

Well, you've been claiming that these advances are "super important" and that set theory is not rigorous, but you have provided no evidence for either claim.

I never said "set theory is not rigorous." Euclid wrote Elements without knowing anything about set theory. Math was done for thousands of years without modern set theory or any modern notion of logical foundations. Set theory is more rigorous than what came before it.

It's not as rigorous as type theory (yes, this is an umbrella term) because type theory can be verified by a computer. Homotopy type theory is an example of the kind of math that set theory can't handle.

There are so many layers of ignorance to unpack here and I don't care to be your unpaid tutor


> because type theory can be verified by a computer

Proofs in FOL can be checked by a computer without any need for type theory - just look at Metamath.


I want to be Terence Tao when I grow up

I think that ship sailed when you were 4...

At this point, Terence Tao is already worthy of a list of facts, much like those about Chuck Norris and Bruce Schneier.

For example:

When Terence Tao solves a problem, the problem appreciates the solution.


Looking at his success, it's hard not to believe that some people are objectively better than others.

Whether some people are objectively better at maths and communication than others, and whether they all get equal treatment under the law are two different things, right? Right?

(I don't know where you grew up; where I grew up we were always obliged to chant "with liberty and justice for all")


Right.

Nobody who has a grasp of basic biology and an honest mind would disbelieve that. The thing is, this completely goes against the liberal "tabula rasa" worldview.

I wish people had more exposure to building mathematical models of things. I am fairly convinced that the only real exposure I was given was to models that we knew worked. So much so that we didn't even execute many of them.

Specifically, parabolic motion is something you can obviously demonstrate by throwing something. You can, similarly, plot where things are observed over a time variable. You can then see that we can write an equation, or model, for this. Most of us jump straight to the model, with some discussion of how it translates. But nothing stops you from observing first.

With modern programming environments, you can easily jump people into simulating movement very rapidly and let people try different models there. We had turtle geometry years ago, but for most of us that was more mental execution than it was mechanical. Which is probably a great end goal, but no reason you can't also start with the easy computer simulations.


Something I really like: the curve that a rope or thread makes when fixed at two points but not pulled taut is not a parabola. It really looks like one, but it isn't. It's a catenary.

That's something you can verify by writing some simulation code, then drawing the curve, and then drawing the best matching parabola on top. It doesn't fit.

To model the issue mathematically you need some not-too-advanced calculus. In both the computer simulation and the mathematical model, you model the rope as being made of very small elements that are linked together (like a chain). In the simulation those elements are small, but finite. In the math you take the limit as the size of the element tends to zero.

It's the same way of thinking, but math gives you some different tools, enabling you to solve for the curve analytically.
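To make that concrete, here is a minimal sketch of the comparison step in Python (parameters are illustrative, and it uses the analytic catenary rather than the chain simulation):

  import numpy as np

  # Sample a catenary y = a*cosh(x/a), fit the best least-squares parabola
  # through the same points, and measure the gap between the two curves.
  a = 0.5
  x = np.linspace(-1.0, 1.0, 401)
  y_catenary = a * np.cosh(x / a)

  coeffs = np.polyfit(x, y_catenary, 2)   # best-fitting parabola
  y_parabola = np.polyval(coeffs, x)

  sag = y_catenary.max() - y_catenary.min()
  gap = np.max(np.abs(y_catenary - y_parabola))
  print(f"sag = {sag:.3f}, max |catenary - parabola| = {gap:.3f}")

The gap comes out at a few percent of the sag: close enough to fool the eye when you draw the parabola on top, but clearly nonzero.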


That's a total tangent, but I was reading a bit, and it turns out that, in practice, real cables hang in a curve somewhere in between a catenary and a parabola:

https://en.wikipedia.org/wiki/Catenary#Catenary_bridges

> Comparison of a catenary arch (black dotted curve) and a parabolic arch (red solid curve) with the same span and sag. The catenary represents the profile of a simple suspension bridge, or the cable of a suspended-deck suspension bridge on which its deck and hangers have negligible weight compared to its cable. The parabola represents the profile of the cable of a suspended-deck suspension bridge on which its cable and hangers have negligible weight compared to its deck. The profile of the cable of a real suspension bridge with the same span and sag lies between the two curves. The catenary and parabola equations are respectively, y = cosh x and y = x²( (cosh 1) − 1) + 1

https://www.quora.com/How-do-you-tell-the-difference-between...

> If the chain is carrying nothing other than its own weight, the resulting shape is a "catenary". If the chain is like a suspended cable carrying a deck below it, and its own weight is nothing compared to that of the deck, the resulting shape is a "parabola".

Which shows that sometimes your model (either using pure math or a simulation) is too simple to capture whatever is going on in the real world. (It gets further complicated when one considers elasticity, etc.)


Exactly! I think this scenario alone would be an amazing set of lessons for many grade schools.

Move this into modeling and then guessing stuff like bridge tensions, and you can easily show what much of the math is good for.

We used to have this with the attempts at building toothpick bridges and such, which I still think is very illuminating. But I think there are a lot of questions you can expose with models that were often only seen by the more advanced students. And again, I agree that getting people to mentally model these things is a good goal. Right now, people rarely ponder things on paper, it seems.


This argument is very close to one by Whitehead in an essay called "The Rhythm of Education". The stages there are called Romance, Precision, and Generalisation - but I'd argue there is an isomorphism (in a suitable category) between that and Tao's three stages.

> The point of rigour is not to destroy all intuition; instead, it should be used to destroy bad intuition while clarifying and elevating good intuition.

This is a key insight; it's something I've struggled to communicate in a software engineering setting, or in entrepreneurial settings.

It's easy to get stuck in the "data driven" mindset, as if data was the be-all and end-all, and not just a stepping stone towards an ever more refined mental model. I think of "data" as akin to the second phase in TFA (the "rigor" phase). It is necessary to think in a grounded, empirical way, but it is also a shame to be straitjacketed by unsafe extrapolations from the data.


> It's easy to get stuck in the "data driven" mindset, as if data was the be-all and end-all, and not just a stepping stone towards an ever more refined mental model.

Yes. "Data driven" either includes sound statistical modelling and inference, or is just a thiny veiled information bias.


Rigour is not about destroying bad intuition, but rather about formalizing good intuition, imho. The ability to know good from bad is somewhere in between total newb and expert.

I think entire research subfields can go through a similar process. Plenty of mathematics was done before mathematical rigor really existed. Then axiomatization became more and more important. The intuition never went away, but I have heard of 'Nicolas Bourbaki' (https://en.m.wikipedia.org/wiki/Nicolas_Bourbaki), the movement to write mathematics in purely formal language while eschewing intuitive language. And then more recently I read a prominent mathematician describing this phase as having been a bit of a mistake. But maybe it was just a necessary part of the field's transition.

I've definitely gone through a parallel transition in physics, but replacing 'rigor' with 'calculation' and 'intuition' with 'physical intuition/simple pictures.' In physics there is the additional aspect that problems directly relate to the physical world, and one can lose and then regain touch with this. I wonder what other fields have an analogous progression.


> I wonder what other fields have an analogous progression.

"Before one studies Zen, mountains are mountains and waters are waters; after a first glimpse into the truth of Zen, mountains are no longer mountains and waters are no longer waters; after enlightenment, mountains are once again mountains and waters once again waters."


This kind of follows the standard midwit meme progression one sees in programming as well:

1. Making stuff is fun and goofy and hacky

2. Coding is formal and IMPORTANT and SERIOUS

3. What cool products and tools can I make?

I feel like this pattern probably happens in many fields? Would be fun to kind of do a survey/outline of how this works across disciplines


Related:

There’s more to mathematics than rigour and proofs (2007) - https://news.ycombinator.com/item?id=31086970 - April 2022 (90 comments)

There’s more to mathematics than rigour and proofs - https://news.ycombinator.com/item?id=13092913 - Dec 2016 (2 comments)

There’s more to mathematics than rigour and proofs - https://news.ycombinator.com/item?id=9517619 - May 2015 (32 comments)

There’s more to mathematics than rigour and proofs - https://news.ycombinator.com/item?id=4769216 - Nov 2012 (36 comments)


"The intuitive mind is a sacred gift and the rational mind is a faithful servant.” - Einstein

The problem, though, is that with half the data, your mind considers gluing another base to the seesaw, to balance things out and restore symmetry and intuitive beauty.

The idea of the 3 levels really resonated with the ideas in "Bernoulli's Fallacy" as well. Right now we are seeing a resurgence of Bayesian reasoning across all fields that deal with data and statistical reasoning. I think many errors of modern civilization were caused by people with a level 2 understanding attempting to operationalize their knowledge for others at level 1.

We need it to become much more common to operate at level 3, especially in fields like enterprise software development.


Good article, but it should be mentioned that it's not new. The first copy in the Wayback Machine is from 2018, but there are comments all the way back to 2009.

https://web.archive.org/web/20180301000000*/https://terrytao...


His old articles are shared a lot on here.

Tao... What can I say... Always great work.

Haven't read a single bad contribution from him. And I've read quite a bit...


I think one of our biggest problems in society (business, politics, etc.) is that Stage 1 superficially resembles Stage 3.

This means that many people in Stage 1 (or Stage 0, if that's a thing) believe that they're as good as Stage 3 thinkers. AKA Dunning-Kruger.

In other words, complete bullshit, confidently delivered, has come to dominate informality-born-of-rigor. And the audience can't tell the difference.


What? Americans learn proper calculus in later undergraduate years? Really?

Tao is Australian.

That said, yes, real analysis is often a third-year class.


Could modern AI help amateur mathematicians to build proofs?

I was trying to coerce GPT-4o into talking about the gcd and lcm in terms of sets of prime factors, where the product is the union of the sets, the gcd is the intersection, and the lcm is the union less the intersection, and it kept telling me I was incorrect and being "non-standard".

It has a long, long way to go.
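For what it's worth, the multiset version of that picture checks out on small numbers. A minimal sketch using Python's Counter as a multiset (12 and 18 chosen arbitrarily):

  from collections import Counter
  from math import prod

  # Prime factorisations represented as multisets of prime factors.
  a = Counter({2: 2, 3: 1})   # 12 = 2^2 * 3
  b = Counter({2: 1, 3: 2})   # 18 = 2 * 3^2

  def to_int(factors):
      # Multiset of prime factors -> the integer it factorises.
      return prod(p ** k for p, k in factors.items())

  print(to_int(a & b))   # multiset intersection (min exponents) -> gcd = 6
  print(to_int(a | b))   # multiset union (max exponents)        -> lcm = 36
  print(to_int(a + b))   # multiset sum                          -> 12*18 = 216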


If you meant multiset, then you were correct. (Not that I expect GPT-4o to make the distinction.)

To an extent; they can give hints and suggest directions, but you need to treat them as an unreliable narrator: think of them as entities that can help or deceive you at random.

That being said, we are researching tailored LLMs and other architectures to assist mathematical research that are more geared towards accuracy at the expense of freedom ("imagination"). The Lean FRO has some related information and links.


Not sure why you're downvoted.

From a few days ago:

https://news.ycombinator.com/item?id=40646909


Is there?

Absolutely.

We have machines that can crank out true theorems, rigorously proven, all day. It takes a mathematician to know what is worth working on. And that is fundamentally an intuitive decision. Computers don't care whether a proof is interesting or not.


It’s a bit tautological since we are defining interesting as what human mathematicians work on. Perhaps if computers ran the show they wouldn’t agree with our definition.

Maybe it is a feedback loop, rather than a tautology? The things many mathematicians find interesting are the things that the general mathematician community is working on. And the way you become a mathematician is by publishing things that the community finds interesting enough to let through the peer review process.

Ultimately, though, this is all funded by the belief that they'll be able to dumb down the good stuff for us scientists, engineers, and other folks who build actual physical things when we hit the point where we need it. (Of course it is an exploration process, so not everything needs to be directly applicable.)

If computers ran the show, we would probably stop plugging them in if they used more power than their theories saved us, or whatever.


Regardless of what computers find interesting, humans want to progress math in areas humans find interesting. We can't fully rely on computers to do that yet, since right now they can't seem to judge that as well as human mathematicians.

Don't necessarily see a tautology here.


Think of pets. We, humans, run the show. There are some things the pets think are interesting because we humans are doing it, but by and large pets like what is dictated by their genes + individual preferences.

Math existed for thousands of years before modern rigor.

True. It’s also interesting to note that a lot of Newton and Leibniz’s original reasoning about infinitesimals itself went through the same sort of three-step process: first it was accepted because it was all there was; then rigour became fashionable and it was thrown out. Finally, only last century, it was shown that such ideas can be made perfectly rigorous in the right setting [see: nonstandard analysis].


