It's basically totally impermeable. I'm almost done with an undergraduate degree in math and basically have no idea what ~90% of the research is about at anything other than a topical level. This is fine, it's written for specialists in the field (hopefully), but damn, for most fields, it's not a whole lot of people.
Compared to physics or chemistry, math gets very specialized very quickly. Depending on the field, there are no easy real-world isomorphisms either, making it even more difficult.
Better language and grammar around core concepts could improve accessibility, but few in the field that I've encountered seem to care about that.
Biology and chemistry are a bit better, but there are still improvements that could be made. You can't easily go from zero to biochem or ochem, and I don't think that has to be the case. Gen chem and gen bio just suck at presenting themes and concepts and instead focus on being a smorgasbord of facts (that you later have to amend or unlearn).
Computer science and electrical engineering are a walk in the park by comparison. We have all sorts of interactive tools and visualizations of algorithms and analysis. We keep refining and making the onboarding process easier and more accessible.
My impression often was that a lot of this stuff isn't so hard if you "get to it", but it's clouded in a lot of complicated language that makes it sound harder than it really is. I never dared to postulate that this is a more general problem of math, but your words sound like that's exactly what is happening.
Do you think there's a way forward to make math more accessible by using simpler language without sacrificing correctness?
One example I see frequently around here is the much-maligned monad tutorial. Writers keep trying to connect it by analogy to concepts with which imperative programmers are already familiar. As a math student, I would much prefer to be given a basic statement of the definition of a monad, a list of properties, and maybe a theorem to prove that some structure is a monad instead of having to fall back to the definition. From there, I would just work through a few exercises and start to understand what it is and how it works. Most people don’t work this way, however. People seem to be extremely resistant to learning by experimentation. Instead, they prefer to be given all the answers.
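To make the contrast concrete, here is what that definition-first presentation might look like, sketched in Python (my example, not from any particular tutorial): a minimal Maybe monad given as a definition plus a list of properties, with the three monad laws checked on sample values rather than explained by analogy.

```python
# Definition-first: a monad is a type constructor M with two operations,
#   unit : a -> M a
#   bind : M a -> (a -> M b) -> M b
# satisfying three laws (checked below on sample values).
# Here "Maybe a" is represented as None ("Nothing") or ("Just", x).

def unit(x):
    """Wrap a plain value: a -> Maybe a."""
    return ("Just", x)

def bind(m, f):
    """Sequence a computation that may fail: Maybe a -> (a -> Maybe b) -> Maybe b."""
    if m is None:
        return None
    _, x = m
    return f(x)

# Two example Kleisli arrows (functions a -> Maybe b).
def safe_inv(x):
    return None if x == 0 else unit(1 / x)

def safe_sqrt(x):
    return None if x < 0 else unit(x ** 0.5)

# The three monad laws, as executable checks on sample values:
x, m = 4.0, unit(4.0)
assert bind(unit(x), safe_inv) == safe_inv(x)                # left identity
assert bind(m, unit) == m                                    # right identity
assert (bind(bind(m, safe_inv), safe_sqrt)
        == bind(m, lambda v: bind(safe_inv(v), safe_sqrt)))  # associativity

print(bind(m, lambda v: bind(safe_inv(v), safe_sqrt)))  # → ('Just', 0.5)
```

The exercise, in the spirit of the comment above, would then be to prove the laws for all inputs, not just these samples.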
I suppose this isn’t really a problem that’s unique to math. We’ve also discussed how a lot of computer users fail to truly understand how the software works and instead prefer to follow steps by rote in order to carry out one particular task.
Aside: as with monads, group theory has many disparate applications, from solving Rubik's cubes to particle physics to crystallography. Trying to teach someone group theory using only a Rubik's cube may not be the best idea, in light of the abstract nature of the structure.
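The definition-first route works here too. A sketch in Python (my example): state the four group axioms and verify them exhaustively for Z_6 under addition mod 6. The Rubik's cube group satisfies exactly the same axioms; it just has about 4.3 × 10^19 elements instead of 6.

```python
from itertools import product

# A group is a set G with an operation * that is closed and associative,
# has an identity e, and gives every element an inverse.
# Exhaustive check for Z_6 = {0,...,5} under addition mod 6.

G = range(6)
op = lambda a, b: (a + b) % 6
e = 0

closed = all(op(a, b) in G for a, b in product(G, G))
assoc = all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(G, G, G))
identity = all(op(e, a) == a == op(a, e) for a in G)
inverses = all(any(op(a, b) == e for b in G) for a in G)

print(closed and assoc and identity and inverses)  # → True
```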
Of course not! If that programmer then goes out and tells somebody else that "monads are burritos", they'll have no idea what that vague, nearly nonsensical statement means. You need the context of grappling with the idea over time for an analogy to actually help.
Good textbooks are already extremely optimized for ease of learning; the authors of these books are universally people who started in your position, thinking that everything would be so much easier to learn if it were just written in sensible language, usually the language they happened to be using when the idea finally clicked for them. And the result still ends up hard to understand, because the concept genuinely is hard to understand.
It's a problem with academia in general. Academics don't get rewarded for explaining things in an easy-to-understand way. Moreover, it is often a huge amount of work to distill results down like this. And if you succeed, the response you may get is "well, that is obvious".
Anyway, this is my counterargument to the "formalization of mathematics": why don't we incentivize people to bring some clarity to the exposition instead? If mathematics is suffering from faulty proofs, then I have little sympathy. Make it simpler; organize the concepts. Muscular calculations only go so far.
e.g. look at this advice from the academia stackexchange, to a young aspiring mathematician:
"If you're relatively young and inexperienced and hoping for best results on the rapid publication of your work in strong journals, I would stick pretty mercilessly to the format: (i) strong introduction motivating your work and explaining clearly the value added both in the results themselves and the techniques of proof and (ii) the rest of the paper contains careful proofs of all the results, in a very clear, linear, easy to follow fashion, e.g. 'Section A.B: Proof of Lemma C'."
Survey articles are explicitly supposed to bring people up on the important results and current state of a subject.
I "only" have a masters degree in Math, but this sounds really wrong to me. Most advanced math concepts are complex and can take months to understand for someone with an unrelated math background (e.g., say, Lebesgue measure theory or Galois theory). I wouldn't even know how to start teaching something like that to someone who doesn't have a formal mathematical background.
I do agree that it's pretty hard to be rigorous without setting up all of this edifice, but I've also found that it's very easy (at least, as an applied mathematician) to take an idea and continue generalizing it until the original application becomes a very small corollary of the new statement—one that appears usually unmotivated and "magical."
In some sense, this is a wonderful thing: you can usually prove a much stronger statement using the same tools you used to prove the weaker one, but in some sense it makes many things unbelievably difficult to understand as a first or even second-time reader.
To give an example, I was reading a book on dissipative PDE-solving as an optimization problem. The book's main chapter starts with a theorem that is so general and so far removed from any one specific application that, even as someone who works on several related fields, I struggled to understand its significance. The book then spends the next several chapters showing how existence/uniqueness of solutions to a bunch of PDEs are all corollaries (with some work) of this theorem.
The problem I have is that the presentation would've been much clearer going the other way. Start with the easy-to-prove existence theorems for the PDEs you care about (which mostly relied on simple tools and could've been proven in a few lines) and build up the generalization to the "main theorem." There, the reader doesn't have to guess the meaning (and/or marvel at the "ingenuity") of the original statement since the motivation is perfectly natural and clear, and they certainly don't have to struggle through a proof without a clear ending. Plus, you usually get the nice side-benefit that, by the time you present the most general statement, the proof is almost always immediate ± some small tricks.
While I do think that math requires a large amount of definitions to work and to put everything on rigorous footing, I also think there's this kind of weird way of writing where definitions and proofs are presented as "magical" and then shown to be right or useful later, rather than the other way around. This is fine for a few things, but I've found that most presentations that work don't follow this template (yet most mathematical papers I've read sadly do).
For every complaint about a textbook, there is an equal and opposite complaint if it had done the opposite thing. At some point we have to accept that the underlying concepts inherently take time and effort to assimilate, that the easiest style to learn from is different for different people, and that the authors really did think about all these issues when they were writing. They clearly care about exposition, otherwise they would not have written a book in the first place, a thankless and financially useless task.
I am not, of course, saying that all textbooks (or papers) would be better served by this structure, but I think a surprisingly large number are. This may also be a consequence of the field I'm in, where most statements I've made have several natural generalizations which are all relatively straightforward to prove, but presenting them all under one large umbrella with a very general construction makes the end result much harder to understand.
Either way, both ways have their merits, but I've (along with several of my peers and editors) found one much harder to work with and read than the other. Though, I will again mention that this may be due to the specific fields I work in.
I want to be clear that I'm also not advocating for the "big reveal finish" which many papers (and some terrible textbooks) fail miserably at. Neither of these mediums is a novel. But I am saying that an approach that motivates the construction and builds the picture in a natural way (instead of "here's the theorem, let's explain why everything is a trivial corollary of it") lends itself to clearer expositions.
Perhaps either presentation can be made clear, but I've found it's very, very difficult to make the latter work in a way that readers can easily understand, while the former falls almost naturally from most research notes I've written and keeps people more engaged.
Some people will, but they probably aren't the target audience for the book, and most people won't.
You don't teach kids who are learning to count about 0 by explaining group theory and identity elements.
Maybe it's obvious that sketching only a little formalism is the best. But then some of your statements end up vague, and students stumble on practice problems where they really need to understand the details of the formalism.
Maybe it's obvious that stating things as simply as possible is the best, even at the cost of generality. But then your statements won't be powerful enough when people try to go beyond the few simple examples you give, and people will feel betrayed.
Maybe it's obvious that a historical approach is the best, for motivation. But historical papers are extremely confusing to read. They're full of redundant checks of things we know are obviously true today, and archaic notation and jargon. People will get annoyed at slogging through ten pages of text to confirm something that is obvious in a modern formalism.
Maybe it's obvious that a modern, clean formulation is best. But then the definitions are unmotivated, seem to come out of nowhere, there's not enough connection to the historical development, and your book won't be compatible with older books.
Maybe it's obvious that no formalism at all is best; you just show how to solve problems concretely. But this turns what could have been a few simple principles into an enormous bag of ad hoc tricks people need to memorize. Textbooks that do this are derisively referred to as "cookbooks".
It is not obvious how to write a book.
I agree, and I also agree that readers will differ in which books they find easiest to read. Different people learn in different ways.
However, in some respects, it is obvious how not to write a book. Mathematics is not magically exempt from effective technical writing skills, nor from all we have discovered about cognitive psychology and how people learn, nor from anything else that goes into creating a generally good presentation.
Certainly some advanced math concepts require a lot of foundations and therefore take time to build up to them, but usually the individual steps along the way should be quite accessible if well presented.
To follow up on one of your own examples, suppose we have someone who understands the basic ideas of abstract algebra, say someone who has done a first course in topics like groups and rings as part of their undergraduate studies. (I note in passing that even a keen school student considering studying maths at university might be able to pick up these concepts, with good presentation, in a few days.) Then suppose that person is curious about why there is a quadratic formula, but people claim that not only do we not know of any corresponding quintic formula, in fact we can prove that no such formula is possible.
Explaining the concepts and key results from Galois theory needed to get to that proof shouldn't take months. There are lots of technical terms like solvability by radicals and field extensions and quotient rings and so on, but if these concepts are properly explained, none of them would in itself be more than a typical undergrad student could understand after a few minutes. Building up the argument would of course take longer than understanding any given step, but it's not prohibitively difficult or complicated.
Unfortunately, in many university lectures and textbooks, those concepts are introduced using only a mixing pot of other technical terms and notation, without any sort of motivation or examples to put them in context. There's no intuition behind them. There's no frame of reference. So then of course the final proof of the unsolvability by radicals of a general quintic just feels like word and symbol soup that you have to learn by rote, because how could anyone possibly have derived it in any sort of deliberate or systematic way? And worse, not only does this result become "magic", the student lacks the ability to use the mathematical tools that support the result for any other purpose either.
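For what it's worth, the skeleton of that argument can be stated in three motivated steps (my compression; the real work, of course, lives in the supporting definitions):

```latex
\begin{enumerate}
  \item To a polynomial $f$ over $\mathbb{Q}$, attach its Galois group
        $G = \mathrm{Gal}(K/\mathbb{Q})$: the symmetries of its splitting field $K$.
  \item Galois's theorem: $f$ is solvable by radicals iff $G$ is a solvable group,
        i.e.\ $G$ admits a chain
        $\{e\} = G_0 \trianglelefteq G_1 \trianglelefteq \cdots \trianglelefteq G_n = G$
        with abelian quotients $G_{i+1}/G_i$.
  \item The general quintic has Galois group $S_5$, and $S_5$ is not solvable:
        any such chain would have to pass through $A_5$, which is simple and
        non-abelian. Hence no radical formula for the general quintic exists.
\end{enumerate}
```

Presented this way, each step is a concept an undergraduate can absorb in minutes, exactly as the parent comment suggests.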
A toddler could understand the difference between a smooth curve and a spiky curve, but to actually get a handle on them computationally, the best tool that we have is Calculus -- and even after decades of people iterating on how to teach it, Calculus is still a mystery to most undergrads who muddle through it.
Similarly I could give you some intuition about manifolds by telling you that they're kinda like surfaces that can have higher dimensionality, but until you're able to actually understand and work with the definition of "locally homeomorphic to R^n", you're not going to be able to do anything with them.
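For the record, the definition in question is short; the difficulty is entirely in learning to work with it:

```latex
\textbf{Definition.} A topological space $M$ is an $n$-dimensional manifold if
every point $p \in M$ has an open neighbourhood $U \ni p$ together with a
homeomorphism $\varphi : U \to V$ onto an open set $V \subseteq \mathbb{R}^n$.
% (Most texts also require $M$ to be Hausdorff and second countable.)
The pair $(U, \varphi)$ is called a chart; ``locally homeomorphic to
$\mathbb{R}^n$'' means exactly that $M$ is covered by such charts.
```

Three lines of definition, but using it (checking that a given space is a manifold, or transporting calculus across charts) is where the actual skill lies.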
I think it's bollocks, this idea that mathematicians are obfuscating easy concepts with complex language. The language doesn't exist to get research grants and gatekeep, it's there because you need some succinct way to describe objects and concepts without running through a full definition every time.
Obviously you can always improve communication, and from my experience most mathematicians try pretty hard to do that, especially in the classroom, but it's not easy.
This is symptomatic of IMHO the biggest single problem with the world of mathematics today. This discipline should be about developing ideas based on rigorous foundations and logic, which is a useful and important purpose. Once you have understood those ideas, even "advanced" results often seem quite simple and intuitive, and we can take that understanding and apply it to other work if helps us to make useful progress. However, the amount of needlessly obscure terminology, poor notation and just plain bad writing in formal papers make the whole field absurdly inaccessible, even to many who might have no trouble understanding the underlying concepts and important results.
Just imagine what would happen if we tried to write engineering specs or software the way postgraduate mathematics research is done. We'd be trying to debug source code where every identifier was a single character, taken from one of three or four different alphabets, with several of them looking similar enough to mistake one for another, with a bunch of combining accent modifiers on top that were used to fundamentally change the semantics of the program, interspersed with comments by someone who needs a remedial class in basic writing skills, full of technical terminology that is defined in terms of other technical terminology from other software packages, except that sometimes the meaning is subtly different in this context but that isn't noted anywhere on the screen at the time, resulting in needing to spend half an hour doing a depth-first-search of all the definitions only to find that a function whose only identifier is the name of the developer who wrote the second version (because the first person to work on it already has another function bearing their name) is actually equivalent to what a programmer would write as
const DAYS_IN_WEEK := 7
Most of us build on top of a mountain of existing work; sometimes just setting a pebble on top and we have a new thing. But no human alive has seen, could remember, or could understand, each and every behavior down in that mountain, from pixels on the screen to pulses on the phy all the way to gates inside their processors.
So you read about the APIs you need; you set your pebble down with a little glue or duct tape; you hope you've connected it correctly; you hope the rest of the mountain works how you want; you hope you didn't miss any use cases for interfacing to the mountain; and you hope it doesn't have any undocumented surprises. Abelson and Sussman called this engineering by poking (1).
Of course, if you want to make something work fast with minimal work, this is great, it's the component future we dreamed about. But it also means we are clueless.
Find a grad student who is just about done with their studies in, say, analysis, and the odds are they understand jacksquat about anything in current research in number theory either.
This post is in support of your point :)
If all papers actually were written to undergraduate level, many would be 500 pages long. Who has time to write a book-length exposition of the basics of their field every time they publish? I mean, have you ever published a technical work? If so, did you go out of your way to make it completely accessible to high schoolers? If you didn’t, you get why professors don’t do the same for undergrads.
I think mathematics is in a different class than physics or other scientific endeavors because there is meaningful mathematics going back to ancient Greece whereas I am not aware of any physics that was known thousands of years ago that is still relevant.
So does that mean we should curb some superhuman thinking and have them explain better to other human beings? I'd think so. It is all about balance, or trade-offs, but at some point a field becomes too esoteric for a society, and it might need to slow down short-term progress to ensure consistent long-term progress.
It seems the math field as a whole (or more specifically the number theorists?) needs more good communicators who can link the cutting edge with the general public. Math academia could give incentives for such activities in addition to pushing the boundaries.
The problem is that things genuinely are hard, and there's only so much compression you can do before you're reduced to meaningless handwaving. Textbooks are already compressed by a factor of 100 relative to the original research literature, sometimes even oversimplified. Demanding something a factor of 100 better than a textbook is going to get you something like one of those useless "learn C++ in 24 hours" paperbacks.
Papers are not rewarded for being easy to understand. People and organizations get rewarded for publishing "complicated" papers. That misalignment is the issue.
For state-of-the-art research in obscure specialisms, of course you're right.
However, numerous papers and books cover more mundane subjects and there is no good reason they could not be readily accessible to anyone with an undergraduate-level background or at least a masters delving a bit deeper into the field of interest. Often the problem isn't dumbing down the material, it's simply poor communication skills.
Put another way, while our cumulative understanding of mathematics now covers a very broad range of areas, that doesn't necessarily mean the depth to reach a good understanding of any particular specialism has increased at the same rate. On the contrary, given that even the most specialised of theories must be something that an individual can come to understand and build upon within a single career, there is an inherent limit to how deep our understanding of the subject can go. That limit fundamentally depends on how efficiently we can build the layers of more elementary understanding on which the state of the art must rest.
I get that I don't have a post doctorate in super math or whatever the bleeding edge academic ivory tower game is these days, but at the same time I feel that if 90% of the research cannot convey any practical description of real-world implications, or at least how it relates to other abstract things that DO have real-world implications, maybe there is some BS house-of-cards fantasy game going on.
From what I understand, there are some financial benefits to cranking out research papers. Perhaps researchers are incentivized to build layer upon layer of abstraction so they can spin any tale they desire in order to turn a quick buck? What better field than math? The purest and most non-tangible of fields.
Another perspective I have on this - Virtually all computer technology or software papers I read are typically understandable to some degree, despite my not having the highest credentials in my field. It would seem the constraint of "if this is in a compsci paper, it needs some reference implementation pseudocode" seems to help keep people honest. If the computer can run it, at least you can view things in a more concrete (albeit still somewhat abstract) manner. Especially imperative code examples. These quickly take very abstract algorithmic concepts and transform them into a step-by-step understanding. If the paper is total bunk, you can usually tell pretty quickly from the results on the computer.
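As a toy illustration of that point (my example, not from any particular paper): the abstract identity gcd(a, b) = gcd(b, a mod b) becomes a step-by-step, machine-checkable procedure the moment you write it imperatively, and a bunk claim would be exposed by the very first run.

```python
import math

def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b != 0:
        a, b = b, a % b   # the abstract identity gcd(a, b) = gcd(b, a mod b)
    return a

# If the underlying claim were wrong, the computer would tell us quickly:
for a in range(1, 50):
    for b in range(1, 50):
        assert gcd(a, b) == math.gcd(a, b)

print(gcd(1071, 462))  # → 21
```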
Automated proofs, where feasible, seem like a very reasonable requirement for published math papers. My ignorance at the higher levels of this field fails to inform me if all math papers could be proven automatically, or if there is some more 'complex' realm of reasoning unsuitable for classical/quantum/etc verification. Perhaps this should be a good constraint regardless - If you can't implement your algorithm/proof on a computer, where are you headed with it anyways? Surely piling another 500 papers worth of layers on top isn't going to move you any closer to a practical outcome.
I actually find Coq rather painful to use. It gives me the distinct impression that it has been designed/implemented by somebody who has never written human-computer interfaces before.
When it comes to adapting humans to Mathematics, or adapting Mathematics to humans - I prefer the latter.
That's 100% clickbait. People know the proof, and the proof helped to create entire new fields. By the way, there are newer and shorter proofs of the theorem.
Read the comments too.
>> if one needs a program to check all the proofs, who's gonna check that program? Another program-checking program? And a program-checking-program-checking program, etc.?

To which Buzzard's reply is that, well, computer scientists have done a thorough job proving the correctness of proof assistants:

>> To check that Lean has correctly verified a proof, we don't need to check that Lean has no bugs, we just need to check that it did its job correctly on that one occasion, and this is what the typecheckers can do. This is a technical issue and you'd be better off talking to a computer scientist about this, but they have thought about this extremely carefully.

Reading this as a computer scientist whose field of study is built on automated theorem proving and logic programming (not what proof assistants do, exactly, but close, and some proof assistants even use resolution theorem proving, I understand), this is placing way too much trust in computer scientists. Every bit of theory on automated theorem proving that has ever been published is like maths: theorems are proved by hand, on paper, by humans. And some proofs can be quite convoluted. There's nothing approaching the complexity of the mathematical proofs that make Kevin Buzzard worry, but still, the proofs are not trivial and there is plenty of scope for error.

So I'm afraid that even with computer-aided proofs, we're still building castles on sand.

Which is quite shocking if you think about it. We think that maybe we can't trust our minds to know anything with any certainty, but we can trust computers to be flawless in their computations. But how do we know that with any certainty, if we can't know anything with any certainty?
I do agree with Buzzard that it's hard to be sure. I've definitely read papers where a critical argument isn't well written or what is written seems wrong. However, if there are low-level errors, I suspect that with some work things could be patched up.
And finally, since the beginning of math, people have misunderstood their own theorems; that doesn't mean their students won't do better.
If someone said, "nobody dead or alive understands all the details of Windows" I doubt anyone here would find it controversial. It's too big to fit in one person's head.
My personal impression is that more formal verification is needed in math, and it should be taught to every student of mathematics in introductory courses, because it gives an unforgiving but also rewarding learning experience with formal proofs.
Feel free to ask questions.
However, Isabelle/HOL/Isar allows proofs to be written in a very human-readable way, so the "documentation" I use is the existing library and how things are done there. There is a convenient Ctrl+left-click action on terms to jump to their definition. That is very helpful for navigating unknown libraries. In comparison, I couldn't read and make sense of the proofs I've seen in Coq.
There is a learning curve with Isabelle/HOL/Isar. It can be extremely frustrating to not know how to express "simple" arguments or definitions. But for someone familiar with manual "formal" proofs in mathematics, most of it can be learned in a few hours under guidance, like in a weekend seminar with a couple of hours on each day.
I'm learning rather autodidactically, by trial and error. I think there would be value in a series of blog posts about "This is how you define a group" or "This is how you reason in nonclassical logic" or "This is how to prove the fundamental theorem of algebra", all applied to Isabelle/HOL/Isar. There are often multiple ways to define things, and to be honest, I don't know the specific advantages and disadvantages. For example, I think you can introduce rings as type classes or as locales or using axiomatizations, and possibly more. This is one example of stuff I don't know how to manage in the system, but there is already a lot of structure defined, and I can reuse that.
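For concreteness, here is roughly what the axiomatic core of "this is how you define a group" can look like, sketched in Lean 4 syntax rather than Isabelle/Isar (my sketch; the type-class versus locale trade-off mentioned above has analogues in both systems):

```lean
-- A group as a structure bundling the carrier's operations with their axioms.
-- (Lean's Mathlib instead splits this across type classes; this is the
-- "axiomatization in one bundle" style, closest to an Isabelle locale.)
structure MyGroup (G : Type) where
  mul : G → G → G
  one : G
  inv : G → G
  mul_assoc : ∀ a b c, mul (mul a b) c = mul a (mul b c)
  one_mul : ∀ a, mul one a = a
  inv_mul : ∀ a, mul (inv a) a = one
```

The type-class style makes instances and notation resolution automatic; the locale/structure style keeps the theory parametric and easy to instantiate several times in one development. That seems to be the core of the trade-off in either system.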
Welcome to our world!
Actually working through some of [the details], when formalising them, made me understand that some of these issues were far bigger and more complex than I had realised.
In the perfectoid project there is still an issue regarding the topology on an adic space, which I had assumed would be all over within about a page or two of pdf, but whose actual proof goes on for much longer
Yeah, that's how software developers feel, all day, every day. That's why we all moan about being asked to give estimates for things. You don't really understand a thing until you've formalised it in code, and often things that look simple and easy turn out to be unexpectedly deep.
Mathematics is a field that's been doing the equivalent of writing pseudo-code for thousands of years. There are endless reams of pseudo-code that looks like it might work, but as any programmer who's ever written pseudo-code knows, whether it actually runs when you try it out for real is anyone's guess.
So now mathematics is starting to experience the pain of multiple incompatible programming languages/ecosystems. There's Lean, Coq, Isabelle/HOL and some new JetBrains language too. Soon they'll start to discover the value of 'foreign language interop', rigorous code review, comments, style guides, design documents, package management, automatic optimisation etc.
Have you ever looked at a proof written in one of these languages? All code in these languages is a hot mess judged by the standards of professional software developers: tons of abbreviations, single letter variables, uncommented, ad-hoc module structure etc. I'm sure we'll eventually see mathematicians having raging flamewars about what clean style is and tabs vs spaces.
Luckily for mathematicians though they work at their own pace and are under no deadline pressure. They'll figure it out.
I think that programming/CS is to mathematics what neolithic revolution was to human culture. It allows for specialization in the stack. People can obsessively optimize different parts of the stack to everyones benefit.
This is Good Old-Fashioned AI, through and through.
Here is an example.
Proving stuff in these systems is still harder than natural language proofs, so making them easier could increase adoption, which would make mathematicians' results more robust/rigorous.
On the other hand, we do need to build a standard library of results that can be used to prove those newer results. You can't do much in analysis without basic results on metric spaces, for instance.
Having a database of all mathematical proofs where all of them are checked for validity, however, is a very useful tool to actually find those edge cases where a proof is wrong and give future mathematicians a solid foundation.
Using AI as an extension of human capabilities is a given to me. Like with heavy machinery, we don't want to do the heavy lifting. We want to steer it properly to gain most benefit.
Yes and no. For some basic conjectures this might be the case, but one of the greatest utilities of certain theorems is how they can be applied to completely new fields. If a theorem used in this manner is not actually true, then the proof is invalid -- and it might be the case that (several proofs down the line) the conjecture you've ended up proving is almost entirely invalid.
There's also the slippery slope of "Principle of Explosion"-type problems with accepting "mostly right" proofs -- it means you will be able to prove any given statement (even ones which are outright false).
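Spelled out, the principle of explosion is only a few inference steps: once a single contradiction is admitted, any statement $Q$ whatsoever becomes provable.

```latex
\begin{align*}
1.\ & P \land \lnot P && \text{(what a ``mostly right'' proof can smuggle in)}\\
2.\ & P               && \text{from 1}\\
3.\ & P \lor Q        && \text{from 2, $\lor$-introduction, for any $Q$ at all}\\
4.\ & \lnot P         && \text{from 1}\\
5.\ & Q               && \text{from 3 and 4, disjunctive syllogism}
\end{align*}
```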
The more important thing is that the more humans are aware of the existence of this tech, and the more who actually use it, the better off society will be in the long run.
The following may seem totally impossible today but seems a lot less impossible to people who have been thinking about this for a long time: once sufficient mathematics has been translated, people can start adding physics postulates; once most physics has been inserted, we can start formally verifying (up to validity of postulates on top of axioms) not just proper operation of digital circuits etc. but also power plants and so on; and once interacting with proof systems becomes as widespread as alphabetic literacy, people might one day be able to formalize and vote on their requirements of society and policy, see predicted perverse incentives, or decide that policy can only be enacted when facets of its consequences can be proven. We might one day not vote on promises of strategy, nor on strategy itself, but rather on outcomes!
I know it sounds absurd.
> "I think there is a non-zero chance that some of our great castles are built on sand," Buzzard wrote in a slide presentation. "But I think it's small."
But he does note:
> This overreliance on the elders leads to a brittleness in the understanding of truth. Fermat's Last Theorem was proposed in 1637 and was once considered by the Guinness Book of World Records to be the world's "most difficult math problem"; a proof was finally published in the 1990s. Buzzard posits that no one actually completely understands it, or knows whether it's true.
> “I’m suddenly concerned that all of published math is wrong because mathematicians are not checking the details, and I’ve seen them wrong before,” Buzzard told Motherboard while he was attending the 10th Interactive Theorem Proving conference in Portland, Oregon, where he gave the opening talk.
Most of the fundamental theorems in each field are understandable by humans. As you get more and more abstract, and get into more and more obscure areas, you start to see non-understandable proofs built on top of non-understandable proofs. There's a danger that those are wrong, but when they collapse it just means that one professor has wasted their life, not that all of mathematics will suddenly collapse.
We're working on the infrastructure to make such a thing easier, but the project does not currently have a leader: https://github.com/treenotation/jtree/issues/83
When you combine this with the fact that these ML programs are not proofs but proof scripts, that perform a lot of "unnecessary" work like searching for a proof rather than just going straight for the answer, it suddenly begins to make sense why these systems take on the order of hours to days to check their whole libraries.
Coq and Lean are somewhere in the middle, because they have proof terms, but the logic itself still requires some unbounded computations. Checking a proof here is often fast, unless you make too much use of computation in the logic. But people often don't care about proof terms, and still store the proof scripts, which are as slow as ever.
Metamath is in this setting somewhat unique in eschewing proof scripts altogether, or more accurately, inlining proof scripts immediately on the author's machine. The resulting proofs are often comparatively long and verbose, but I would argue this is only a display matter, since all the other provers are doing the same thing, they just aren't showing it.
First, jtree is a bit orthogonal as that’s just an implementation. The github issue is there for convenience, it’s arbitrary.
The reason for tree notation is that it’s a level up from binary. Binary notation does not give you the ability to define new symbols. Tree notation is a 2d binary. You can define new abstractions. It is a bridge between human and machine languages.
Take any concept in mathematics, such as “derivative”. How would you define such a concept, starting only from 1s and 0s? Tree notation gives you a method to do that, in a way that’s not only efficient (noiseless), but also gives you practical tools, like complexity counting.