
TLDR: The author independently re-discovered what you may know as Old Code Syndrome.

I think that's because mathematical papers place too much value on terseness and abstraction over exposition and intuition.

This guy's basically in the position of a fairly new developer who's just been asked to do a non-trivial update of his own code for the first time. All those clever one-liners he put into his code made him feel smart and got the job done at the time. But he's now beginning to realize that if he keeps doing that, he's going to be cursed by his future self when he pulls up the code a few months later (never mind five years!) and has zero memory of how it actually works.

I'm not intending to disparage the author; I've been there, and if you've been a software developer for a while you've likely been there too.

Any decent programmer with enough experience will tell you the fix is to add comments (more expository text than "it is obvious that..." or "the reader will quickly see..."), write unit tests (concrete examples of abstract concepts), give variables and procedures descriptive names (the Wave Decomposition Lemma instead of Lemma 4.16), etc.
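To make the unit-test point concrete, here's a toy Python sketch (the names are invented for illustration). The test is a permanent, checkable witness of an abstract claim, the way a worked example pins down a lemma:

    # A toy "lemma" and a test that records one concrete instance of it.
    def reverse(xs):
        """Return a new list with the elements of xs in reverse order."""
        return xs[::-1]

    def test_reverse_twice_is_identity():
        # Concrete witness of the abstract claim reverse(reverse(x)) == x.
        example = [3, 1, 4, 1, 5]
        assert reverse(reverse(example)) == example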




It would be really nice if all it took to understand difficult mathematics were some easy programming tricks.

The problem with looking at old code is that you forget what is going on or what the purpose of each component is. The problem with looking at old mathematics is that it is genuinely very difficult to understand. You work very hard to become an expert in a field and get to a level where you can read a cutting-edge research paper. Then if you let that knowledge atrophy, you won't be able to understand it without a lot of re-learning when you look at it again.

Unfortunately cute tricks like comments and concrete examples won't save you here (if concrete examples even exist -- oftentimes their constructions are so convoluted that the abstract theorem is far easier to understand, and oftentimes they are misleading: the theorem covers all cases, but all of the easily understandable concrete examples are completely trivial and don't require the theorem at all).

Programming has existed for, say, 50-100 years. We have recorded mathematical history going back thousands, with contributions from most of the most brilliant human beings to ever exist. Do you think perhaps there's a reason why a simple and easy trick like commenting and renaming lemmas has been discovered and solidified as standard practice in programming, but hasn't been adopted in mathematics? Are mathematicians really just SO dumb?

The answer is those tricks just aren't good enough. Mathematicians do exposition. They do plenty of explaining. Any textbook and even many research papers spend a huge amount of time explaining what is going on as clearly as is possible. As it turns out the explanation helps, but the material is just plain hard.


> Programming has existed for, say, 50-100 years. We have recorded mathematical history going back thousands, with contributions from most of the most brilliant human beings to ever exist.

Mathematics with a solid logical foundation has also existed only for the past century, and writing programs is actually equivalent to doing a mathematical proof, as some have already pointed out.

The actual problem is that when programming you are talking to computers, so you have to lay out each step, or the computer will not know what to do. Sure you can use libraries to do very complex things all at once, but then that's because those libraries have already laid out the steps.

But when doing mathematics you are talking to humans, and you often skip steps in a very liberal way. There is indeed a library of theorems that is written down, but there is also an unwritten library of "it trivially follows", which a mathematician might feel humiliated to be asked to actually write down.

When you think that there is something so trivial that computers must be able to do, you still have to find or invent a library for that. When you think there is something in mathematics so trivial that it must be true, you just need to convince your audience.

One day all mathematical papers will come with a formal proof, but that day has not yet arrived.


To clarify the last sentence: I am not criticizing mathematicians for not doing formal proofs. Often mathematical publications are not final "products" but explorations into new methods. Requiring formal proofs for every publication, with the status quo of computer-assisted proving, would surely impede the development of mathematics. What I am saying is that I hope one day, with the development of computer-assisted proving, the chores involved in doing formal proofs will be reduced to such a degree that mathematicians are more inclined to do them than not.


Mathematical papers will not come with a formal proof until formal proof systems have an "auto-prove" button strong enough to do everything for them, so that it can come as an afterthought that takes 5 minutes.

Formal proofs do nothing to help you understand the mathematics; all they do is help check for errors in logic -- have you ever read a formal proof? They are leagues more obtuse and difficult to understand than any piece of mathematics from any field.

A program is technically truth-value equivalent to a proof, but the resulting documents/pieces of text are MILES apart. The proof is an explanation in plain language to others who understand the topic of why something is true. The formal proof is a sequence of opaque inference rules that the computer promises is equivalent to a real proof.

It's the difference between understanding a formula and plugging it into your calculator. The calculator is more likely to compute correctly, but it gives you no help when it comes to actually understanding mathematics.

Mathematicians as a community completely reject the notion of using formal proofs (while sometimes begrudgingly accepting they may be useful to check our work) because they completely miss the point of what we are actually doing.


> writing programs is actually equivalent to doing a mathematical proof

Only in the sense that writing in brainfuck is equivalent to writing in Python...


True, but the point is that in principle you can write very detailed, fully formal proofs, in exactly the same manner programs are written. The rest of my comment then discusses why it is not written that way.

Doing formal proofs manually is like writing Turing machine instructions. Computer-assisted proving gives you higher-level tools, but apparently it is still not powerful enough to be accepted in the mainstream. And unlike programming, where you get stuck if you haven't invented higher-level languages, mathematicians have the widely accepted tool of "it trivially follows". Bourbaki might disagree, though.


That's a very illuminating analogy.

Users of high-level languages typically do not think about what the compiler is doing, and if in fact they do not know what the compiler does, often the only way to find out is by reading the compiler's source code.

This is akin to looking at a math paper, seeing "it trivially follows...", and finding that your only recourse for learning why it so trivially follows is to get a mathematics degree.


There is still a key difference.

With programming the information is always there. If you don't understand a higher level language but do understand the language in which the compiler is implemented, you can always read the compiler.

The "it trivially follows", the information is simply missing. Gaining a mathematics degree will often give you the intellectual power of finding the missing pieces yourself, but the only sure way is to ask the authors themselves.


At least in programming it's possible to step into a function and/or work at multiple levels of abstraction.

Getting a math degree just to be able to fill in the gaps created where 'it trivially follows' is equivalent to being required to memorize the API for a framework because clearly defined documentation is hidden behind a paywall.

So-called 'intellectual power' has little to do with it. What you're describing is familiarity with navigating a minefield of poorly structured information and tapping academia (i.e. at a very high cost of entry) for 'special access' to resources that bridge the gaps.

If software development suffered from the same informational constraints and lack of innovation, we'd still be playing Pong.


Well said. Memorizing the API is fundamental to success in most branches of math - but this erects a barrier that a lot of able individuals are unable to cross. I've always thought this was a problem but I almost never hear of anybody complaining about this so I figured it was just me. Languishing in the established system is usually interpreted as "sour grapes".


It strikes me as a collective means for mathematicians to reduce entry into their field and thus to increase their own salaries. Maybe you don't have to get a "math license" to practice math, but you have to somehow acquire a huge body of knowledge that is almost never spelled out fully in the mathematics literature. This will prevent many people from becoming mathematicians and thus make math a more exclusive and lucrative endeavor.

To be cynical about it.

I guess that's not much different from most fields' use of specialized vocabulary, etc. There are many informal methods of dissuading people from competing in your own little corner of the labor market.


> ... because clearly defined documentation is hidden behind a paywall.

Hmm, you are from the US, aren't you? :)


>> writing programs is actually equivalent to doing a mathematical proof

> Only in the sense that writing in brainfuck is equivalent to writing in Python...

Actually, in the very concrete sense that a program is a proof of the formula that is equivalent to its type[0].

[0] https://en.wikipedia.org/wiki/Curry–Howard_correspondence
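To see how concrete the correspondence is, here's a minimal Lean 4 sketch (the theorem name is mine): the same few characters are accepted both as a program (apply f to a) and as a proof of a proposition:

    -- Read as a proposition, the type (A → B) → A → B is modus ponens.
    -- The function below is simultaneously the program and the proof.
    theorem modus_ponens (A B : Prop) : (A → B) → A → B :=
      fun f a => f a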


> Do you think perhaps there's a reason why a simple and easy trick like commenting and renaming lemmas has been discovered and solidified as standard practice in programming, but hasn't been adopted in mathematics? Are mathematicians really just SO dumb?

No, they aren't, but the example in question would have been easier to understand if he had tried to explain what was going on, instead of just saying "it is trivial that this or that follows". I think the comparison with "old code syndrome" is pretty much spot on, to be honest.


... especially if code is to be considered in terms of information theory. Context is important, and context can't be encoded without raising the entropy. Also, speech and to a large degree mathematical symbolism are sequential, so multi-dimensional problems have to be broken down into one sequential dimension, raising the entropy dramatically or losing information. Can't help it.


Leslie Lamport wrote a paper on this subject over 20 years ago.

http://research.microsoft.com/en-us/um/people/lamport/pubs/l...

As far as I can tell, nobody has really adopted any of his proposals yet, and most papers continue to be written in pseudo-prose. Remember, mathematics used to be written in full prose (e.g. "the square of the hypotenuse is equal to the sum of the squares of the other two sides") and it took hundreds of years for us to realise that "x^2 + y^2 = z^2" was a better notation.

While mathematics may be "genuinely very difficult to understand", so is any complex piece of software. Software engineering techniques might be useful to mathematicians, even if they are all new-fangled and modern.


The structure of Doron Zeilberger's proof of the Alternating Sign Matrix Conjecture is reminiscent of this, though possibly not directly inspired by it: http://www.combinatorics.org/ojs/index.php/eljc/article/view...


"subsubsubsublemma" kind of cracked me up. He seems very dedicated to not numbering the nested schema in this proof. :-)


It's also the jargon.

I remember years ago telling a coworker that I was an ACM member and subscribed to the SIGPLAN proceedings. He looked at me and with all sincerity asked, "You can understand those things??"

To which I responded, "About half," but I totally sympathized with his question. Both Math and CS need the reincarnation of Richard Feynman to come and shake things up a bit. There's too much of the 'dazzle them with bullshit' going on. It's no wonder that it takes so long for basic research to see application in real scenarios. You people bury your research under layers of obfuscation about half the time. Does it really help anybody to do that? Why do you do that?

"If you can't explain it simply, you don't understand it well enough." is my new favorite Albert Einstein quote.


Thing is, I did understand it. Hell, I looked through my notes from my maths degree (5 years ago) and guess what - most of it seems like nonsense. The worst bit is that these were notes to myself - jam-packed with comments of "so obviously" followed by a transformation I can make no sense of at all.

It makes me pretty sad to think what a waste of time that learning was. Also the flip side of "hehe - I was well smart" is "shit - I'm now a moron"


>"If you can't explain it simply, you don't understand it well enough." is my new favorite Albert Einstein quote.

Yes and no.

I spend a fair amount of time explaining things to children. Not exactly five year olds, so no ELI5. More like ELI13. But doing this often requires oversimplifying points, so that you either hand-wave or even sometimes give incorrect examples that are 'good enough' at the level that you are aiming at.

For example, consider explaining gravity as mass attracting mass. That is oversimplified and breaks down at certain points, but for explaining to a kid why objects fall when you drop them, and even giving an opening to explain things like the acceleration of falling objects, it is good enough.

So a better way of saying it is that if you understand both the subject matter and your audience well enough, you will be able to give simple explanations that increase the audience's understanding.


> "If you can't explain it simply, you don't understand it well enough." is my new favorite Albert Einstein quote.

It’s also one of his more moronic quotes. Certainly on a very abstract, dumbed-down level everything can be explained simply; sure, if you cannot give your parents a rough idea what you’re doing, you might want to look into more examples. But there are plenty of things which require a very extensive basis to be understood thoroughly.

For example, it is very easy to summarise what a (mathematical) group is, and for anyone with a basic understanding of abstract maths it will be understandable. It's also very simple to find some examples (integers with addition, ℝ∖{0} with multiplication etc.) which might be understandable by laypeople, but you will either confuse the latter or only give examples and not the actually important content.
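For reference, the whole definition fits in a few lines (in LaTeX, since that's how we'd write it anyway):

    A group is a set $G$ with an operation $\cdot \colon G \times G \to G$
    such that: $(a \cdot b) \cdot c = a \cdot (b \cdot c)$ for all
    $a, b, c \in G$; there is an identity $e \in G$ with
    $e \cdot a = a \cdot e = a$ for all $a \in G$; and every $a \in G$
    has an inverse $a^{-1}$ with $a \cdot a^{-1} = a^{-1} \cdot a = e$.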

Further, when you have "simply explained" what a group is, can you go on and equally "simply explain" what the fundamental representation is and how irreducible representations come about? You just need a certain level of knowledge (e.g., linear algebra) already, and not every paper can include a full introduction to representation theory.


"You just need a certain level of knowledge (e.g., linear algebra) already and not every paper can include a full introduction into representation theory."

Then why is paper still the overwhelmingly preferred medium?

Using hypertext it would be trivial to link to an external source describing the specific concept used from linear algebra.

Not providing supporting links is only good for an audience that holds the entirety of mathematical knowledge in their heads (i.e. mathematicians in academia).

The rest of the world, incl those who have since moved on like the author, don't fit into that category.

Therefore, the work can only be accurately read and understood by the tiny minority of specialists capable of decoding the intent of the work.

Limited reach = limited value to society.

Is the intent of a PhD really to advance the field of mathematics? Or is it just another 'measuring stick' for individuals to prove to others how 'smart' they are?


I have taken to adding the corollary response ", but if you can only explain it simply, you also don't understand it well enough."


Einstein also said to make things as simple as possible, but not simpler. There will always be things that are irreducible.


But at the same time, if you can ONLY explain it simply, you don't understand it well enough.


While "easy programming tricks" are not all it takes to understand difficult mathematics, writing a math paper like good code makes it drastically easier to understand what's going on.

E.g., encapsulation - given a theorem with a highly technical proof whose ideas are not super important, hide the proof in an appendix. This is directly analogous to private functions which are only used to power a public one.
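Roughly, in code terms (a toy Python sketch, names invented):

    # Public interface: the "theorem statement" a reader needs.
    def is_perfect_square(n):
        """Return True if n (a non-negative int) is a perfect square."""
        return _isqrt(n) ** 2 == n

    # Private helper: the "proof in the appendix" - technical, and
    # safe to skip unless you need the details.
    def _isqrt(n):
        """Integer square root by Newton's method, for n >= 0."""
        if n == 0:
            return 0
        x = n
        while x * x > n:
            x = (x + n // x) // 2
        return x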

Naming lemmas is also helpful, although not sufficient.

Mathematicians aren't dumb. But they do have a set of traditions and a rhetorical style which is often blindly copied. Also, most mathematicians haven't been exposed to good software engineering practices - consider how few use git and instead just email around "paper_v3_chris_edits.tex". Having worked in both the math world and the CS world, math folks can learn quite a bit from software.

(And vice versa - e.g., I just wrote a blog post about how some abstract math solved a real software engineering problem for me: http://engineering.wingify.com/posts/Free-objects/ )


A paper cannot teach a layman everything they need to understand the topic, but there are many papers out there I have difficulty understanding because of how they are written, yet when reading them with a companion exposition I can comprehend them. It is a balance between getting one's point across and targeting a wide enough audience.

It is the same with code. No one should be writing production code at the level where any non-programmer fluent in the language of the comments could understand it. But they should write it simply enough that performance isn't impacted and that anyone maintaining the code can understand it without having to spend extreme amounts of time digesting it. And sometimes a key performance boost will turn into a 'here be dragons'.

I do think mathematicians are optimizing a bit too strongly for similar-level peers, but please don't think I'm trying to say they are dumb for doing so.


Since when are standard tools of good education 'cute little tricks'? Mathematics is hard. Mathematicians are some of the smartest people in the world. Yet the current tendency to not do these things, or not do them enough, is a consequence of a certain culture, efficiency constraints, and even the near-universal use of LaTeX.


I completely agree on the culture point:

Programming and math _are_ essentially the same (Curry-Howard isomorphism) and the problems are indeed equally complex.

The difference is that programming is driven on economic terms, hence agility, flexibility, etc. have been developed.

Mathematics is driven in the university sphere, where intrinsic motivation is mostly what drives things. Not many mathematics professors have the urge to sit down and learn Scrum, read manifestos of coding practice, etc.

The math culture will eventually see itself be forced to go these ways to keep up.


I don't think that programming and math are the same in a practical sense. I am studying math and CS and they're fairly different. Programs deal with specific things, types and data that you manipulate and see with your eyes. Maths deal with abstract concepts for which finding examples can be pretty difficult.

Also, while programming, you can design your functions and their interfaces before writing them down. In maths, that's impossible: you're constrained by what can/cannot be done.

Scrum or coding practice doesn't apply here. Maths don't have to "keep up"; they're already way ahead of their applications. It also doesn't need to be fast or flexible, just rigorous. And while motivation and intuition help (especially when learning), some concepts cannot be motivated or given an intuition, and even in that case you still have to learn the small details and formalizations.


> Programs deal with specific things, types and data that you manipulate and see with your eyes. Maths deal with abstract concepts for which finding examples can be pretty difficult.

Have you forgotten the pains that you went through trying to understand the difference between values, pointers and references, lexical and dynamic scoping, static and dynamic typing, and the like? Can you see them with your eyes? Have you ever tried fully explaining any of those to a non-CS major in half an hour? :)

Abstractness is pretty subjective. For non-programmers, even the idea of CPU and memory can be abstract. And it doesn't help to open up a computer and point to the hardware; that is like claiming a mathematical paper is not abstract by pointing to the very concrete paper and ink that embodies it.


I'm not saying those concepts are not difficult. But values, pointers and references are things that relate to memory. You can simulate on paper how a value or a pointer is managed in a program. Same with scopes and static typing. There are specific examples for all of those.

Some concepts in mathematics are way above the abstraction level that you mention. For example, the projective plane, or nowhere-differentiable functions, or geometry in higher dimensions; those are concepts that not only do not have any physical equivalent, but are also very difficult to grasp and imagine in your head.


True. But still, I think the problem lies more in the fact that mathematicians skip too many steps in their proofs (see another comment of mine) than in the inherent abstractness of mathematics.


I've done mathematical research in the past and I actually think a lot of the practical methodology I've learned from software development could be incredibly useful to mathematicians. I'd love to work full-time on a selection of related research problems with a group of coworkers following a sort of "agile" process with quick morning standups, a centralized repository of works and proofs in progress, "proof reviews", Trello boards, and so on.

Would it actually work? I honestly don't know, but I'd seriously love to try it.


Programming is a subset of math, namely the part of math that deals with algorithms. There are many parts of math that are not contained in programming. For example, floating point numbers are finite representations of real numbers, but since floats are finite, without some math theory beyond just algorithms there is no way to understand them. There are many similar examples.
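A quick Python demonstration of the gap between floats and the reals they stand in for:

    # Floats are finite binary approximations, so familiar algebraic
    # identities quietly fail:
    print(0.1 + 0.2 == 0.3)      # False
    print(0.1 + 0.2)             # 0.30000000000000004
    print(sum([0.1] * 10) == 1)  # False: ten dimes don't make a dollar

Explaining why takes theory about the representation (rounding-error bounds and so on), not just the addition algorithm.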


The differences you note are what I consider cultural differences.

Math _indeed_ has the idea of interfaces and implementations. They are called something different (existential types, from a PL point of view).

I recommend looking into type theory; in particular the book "Types and Programming Languages" provides a very good introduction.

In CS we have two areas that make it completely clear that math and CS are coupled, namely complexity theory and programming languages. After having taken a course in each, and pondering a bit, it should indeed be possible to see that they are the same.


I'm curious, are you saying LaTeX is bad for math?

I'm not doubting you, I'm no mathematician, but my understanding is that it is only positive, as it improves the legibility of the text.


> I think that's because mathematical papers place too much value on terseness and abstraction over exposition and intuition.

If it only happened in mathematical papers... It is all over the place, in books, classes, etc. Few professors tell you what they are doing and why; they just throw formulas, theorems and symbols at you. Intuition is not given its necessary importance, first because it's hard to grasp some concepts, and second because sometimes there's no easy intuition behind them unless you have deeper knowledge of the subject.

Regarding "unit tests", or descriptive names... That's very difficult once you get into advanced math. Examples are probably going to be trivial or too contrived to be useful. And if you have to give a name to a theorem or lemma, you'll end up describing its full contents.


In Germany you pick your advanced courses for the last two years of high school. I picked Math and Physics.

We did a bunch of linear regression (with Maple?), I believe, and some manual differential equations. I was one of the best in that class, but when I asked what we needed this stuff for, the other people in the class looked at me and said the following:

"if you ask this question you're in the wrong class"

I went on to study engineering, and I guarantee you the other two still don't know what that stuff is good for. They learned the formulas by heart and then went on with their lives.

I mean it was fun for me, it was basically a coding exercise. And I was happy because I was faster than everyone else, but I didn't get why we were doing it.

The same thing actually bothered me about my studies. Just one example: folding is a fairly easy concept. But for some reason they first want to drill it into your head; you learn a bunch of techniques, and then, if you're lucky and you stick with it, eventually it clicks and you realize why in a completely unrelated class.

Why can't we just provide a simple real life example first and then go on explaining the details?

Aren't things easier to grasp when you can have a real understanding/connection to it? Isn't that precisely why people that learn coding at home tend to be better than those that just studied it, because they were told it's a solid profession to study?

I think with a good example and visual representation you can probably teach most of the stuff that's taught in Uni to young kids. But then you would be forced to admit that you wasted a lot of time during your own life, who'd want to admit such a thing.


> Why can't we just provide a simple real life example first and then go on explaining the details?

That's a very good approach, which I try to follow in all my writings. Basically, I imagine a reader that is generally uninterested in the material, so the first thing to do is to "pitch" the mathematical concept using a simple example, or just say why the concept is useful. In the remainder of the lesson, I assume the reader might lose interest and stop reading at any point, so this is why I put the most valuable content first (definitions and formulas), followed by explanations, and finally a general discussion about how the material relates to other concepts.


> I was one of the best in that class, but when I asked what we need this stuff for the other people in the class looked at me and said the following: "if you ask this question you're in the wrong class"

I am always so disappointed when I hear people express this. Being able to place an effort in a broader context is so helpful in being able to approach the work well. A few teachers in the Literature department in high school had the same attitude and it was incredibly demoralizing and left me kinda directionless in their classes. I wish things like https://youtube.com/watch?v=suf0Jdt2Hpo had existed back then to give me some idea of what useful and interesting literary analysis looked like and could do.


I think there are teachers for whom rote repetition is teaching.

I used to know a math lecturer, and his attitude was very much that it was his job to throw proofs at his students, and the bright ones would put the rest together for themselves.

He wasn't even remotely interested in the less bright ones, and certainly not in presenting the material in a way that made it easier for them to follow.

Digital has real potential here, because you can build animations and virtual math labs to explore concepts and give them a context, and suddenly math becomes practical and not just an excuse for wrangling abstract symbols for the sake of it.


> Why can't we just provide a simple real life example first and then go on explaining the details?

This. People learn differently, in my case, if I can't get the 'Why' first, I'm not that excited to learn it. I guess making 'simple' real life examples in many cases is hard. I also tend to learn things much better if they came from a real problem/need I have. There was a good discussion about a 'project based university' here: https://news.ycombinator.com/item?id=10989341


Yeah, teaching the basics of an abstract concept without first explaining how it fits into the bigger picture is IMO not the best way to motivate some people. I'm also a person who wants to understand why it's important instead of just trusting someone that it'll be useful "later".

It'd be cool if, once you start your major, there was basically an overview class explaining why each of your courses is important and what concepts (at a high level) you should be grokking with each course and semester; basically some context to frame your learnings.


I took an automata class where the professor talked into the chalkboard and refused to explain why we were required to learn any of the material. It wasn't until later in the compilers course that we had a teacher who actually took the time to explain how all that mysterious theory actually had a place in the real world. So many light bulbs went off in my head during that class.


I had a similar experience from a different perspective -- I took a course on Theory of Computing, which was 50/50 gate-level CPU design and mathematical work on computability, automata and so on. I found this interesting as an intellectual exercise and to get a grasp on what is computable and what isn't, but then the next semester it proved super useful in the compiler design course.


I am also much more happy with an answer along the lines of "it has no practical use currently but is interesting because of...." than no answer at all.


I actually think you're in the norm. The "why" helps create a belief; a belief is something that ignites action. Without it, someone's belief as to why they should learn [X] is too often defined as "to get a good grade."


High school math was awful at this in the US. I went through a year of calculus and never knew what the heck it was for.


When I asked in high school what math was for to some professors, they said it was more to 'improve your thinking' than to use it in 'real life'. Now, a math course designed to be progressively easy to grasp, personalized when needed, entertaining and showing lots of examples from real life (e.g. relating algebra with 3d and video games) requires a very talented educator; teaching well is really hard and usually there's not enough time or resources to do it. That's why I think one of the most important skills to develop is to learn how to learn.


Yup. I don't recall actually learning anything in math class from 7th grade through my senior year.

I learned trig and geometry from my shop and programming courses... Trying to do graphics in QBasic and determine lengths and angles in carpentry gives concrete examples. It's not as though most math sprang spontaneously from pure thought-stuff - at some point, architects, inventors, astronomers and others in concrete endeavors discovered these rules.


I think it very much depends on the teacher.

It's not difficult to explain the first few topics of calculus in terms of distance, speed and acceleration. Other examples I remember were washing lines (a catenary), the path traced by a steam train's wheels, and rocketry.

My teacher in England always had a real world example, but students with the other teacher didn't.


> Why can't we just provide a simple real life example first and then go on explaining the details?

But sometimes there is no real life example. Fundamentally, math is abstract. Yes, mercifully, its models often have analogues in nature, making its purpose utilitarian and intuitive. But sometimes no analogue exists. As in much of modern physics, in math, often you have only abstraction.

I think that's why math is difficult to learn. Without compelling illustrations based in the physical world the student must follow the concepts and proofs using the rigor of math's legal transformations, fortified only by the faith that these formalisms will sustain truth. But too often the practitioner must remain oblivious to the utility and implications of both the end and the means.


I even asked at university (statistics course), regarding some specific test, "What do I need this for?" The professor looked at me and said, "You'll never need this." I packed my things and left (and finished the course about 3 years later with a different professor).


I don't know... I'm taking a theory of computability class right now and I'm sure glad that the pumping lemma for regular languages is called "the pumping lemma for regular languages"


But isn't that an attempt at a "descriptive" name?

The person who discovered this lemma clearly envisioned the underlying process as a device pumping out new strings belonging to the language.


I like to imagine they envisioned the strings themselves being "pumped" up by inserting new characters in the center.
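(For anyone following along, the formal statement fits either picture:

    If $L$ is regular, there is a $p \ge 1$ such that every $s \in L$ with
    $|s| \ge p$ splits as $s = xyz$ with $|xy| \le p$ and $|y| \ge 1$,
    and $x y^i z \in L$ for every $i \ge 0$.

so the "pumped" part $y$ sits near the start of the string.)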


I took a brick-and-mortar automata class a long time ago, and a classmate who clearly wasn't reading the text before lecture insisted the prof called it the "plumping lemma" because it was about growing wider strings. That might have been the only funny thing that ever happened in automata class, unfortunately.

Education by satire / humor is a sadly under-explored field. "A satirical approach to linear algebra"... not sure if I'd kickstart support that, or turn around and start running. In the eternal spirit of "anything that sounds interesting and impossible is a great startup opportunity" I propose that someone ... etc etc, you know.


My professor has had some quality quotes this quarter:

"Misspell Chomsky's name as Chompsky, and I'll give you a pass; Misspell Turing as Turning or something? I'll flunk you"


> Few professors tell you what are they doing and why; they just throw formulas, theorems and symbols at you.

That's not true. Proofs are a core process of all mathematics. Writing proofs can take up a majority of your time in some classes.


+1 on the final point. It may be difficult to appreciate how terse terse is here.


I think you missed the point of the article, which was that students, parents, standards-setters, educational theorists, and legislators have a distorted idea about what education is for and why certain subjects are taught. We don't teach history so that children can recite the years in which various battles happened. We don't teach algebra in order that everyone in society knows how to factor a simple polynomial.

The author can no longer understand his dissertation, but that doesn't mean he failed, or that the educational system failed by granting him a PhD for the work. Rather, the dissertation was about proving to the system and himself that he had learned how to tackle a complex problem and generate a solution that would pass muster with his academic mentors. In the process he learned many skills indirectly, became a more effective problem-solver in general, and had fun, all of which are far more valuable and far more important than the topic of his dissertation and whether or not he still understands it.


I started taking readability very seriously once I started going back to extend old code and finding I couldn't immediately understand what it was doing.

If I have that problem now, it's two problems: the original problem, and the readability problem. The readability problem is solved first, and the original problem can only be solved afterward.

I don't see comments helping me; I can spend the time better making better method names and local variable names, extracting methods, and ensuring that lines do not run off the screen.

Maintaining my old code got significantly easier after I started doing this.


I'm fixing someone else’s code right now and a few single line comments would have saved my client thousands of dollars.

Today I put in a log statement to see why the code deletes data from the database if there is no new temperature data for the time period. The cron job has been running all day and so far every time it attempts to delete the data, there is no data to delete.

Another line runs a different script if the time is 15:00. No idea what is magical about that time. I added a bunch of log statements to see what happens at 15:00 that is different from every other hour of the day. So far I have no clue.

I’m sure the original coder had a reason for inserting these bits of code, but damned if I know what it was.

There are dozens of instances like this in the code. A one line comment would have saved me hours of work and the client several thousand dollars.


I'll go one better: I've got servers in my machine room that I don't know the purpose of. Literally in some cases the way I've found out what they do is shut them off and wait for someone to complain.


I've seen stuff like that before, typically it's intended to be temporary code put in as a way of troubleshooting or achieving a non-standard result but wasn't cleaned up properly. I see that so often that I simply assume it's the case and not even try to run it down any further. Just make it work properly and move on.


Deleting from an empty database could just be a sanity check. If it's logically supposed to be empty at a given time, it's a perfect time to clean up database errors...


> I don't see comments helping me...

I don't know why people say this type of thing, as if there's some choice you have to make between good names and comments. You can have both, and there are absolutely times when comments are necessary. Too many comments may be a "smell", but code that doesn't require any comments at all is very unlikely.


I think a lot of coders jump to comments as a first resort rather than trying to make the code clear. In theory it's not a tradeoff, but in practice I've seen a lot of heavily-commented code with single-letter variable names. My rule of thumb would be something like "never write a comment until you've spent at least 5 minutes trying to make the comment unnecessary".


The problem is the bug-catching cruft that creeps in around data input and mathematical operations.

I think it would really help to have a language that distinguishes the original method's raw body (displaying the pure intentions) from the various filters and catch-oriented exception handling, and can hide the latter.

Such a function with descriptive names could be almost as good as a comment - and is usually there. Comments can be useless too: /* Function taking these arguments, returning this type */


> there are absolutely times when comments are necessary.

I'll have to take your word for it, because I've yet to run into that situation.


Comments are not for how or what, the code does that just fine.

Comments are for why.

Sometimes the why is obvious, then you don't need a comment. The rest of the time, add that comment. Even if it's something like "Steve in accounting asked me to put this in."


Good comments do frequently answer the why question and, sometimes, the non-obvious what.

It's generally true that the code expresses the what, but it's also true that it can take the reader time to discern. A simple comment here and there can be a shortcut to this discernment which, over thousands of lines of code can save serious time.


Such comments are best added in commit messages. If you add a JIRA task number to each commit, it's almost always already there (because in the task there will be your discussion with Steve).

And when you change the line because Tom asked you to change it again - you don't need to remember to remove the comment about Steve.


Sorry, but that is ridiculous.

I have recently started working on an existing project which is all new to me. Having to go through git commits looking for relevant comments is just a silly suggestion. It's far more useful, when I am browsing through the code trying to understand it, to see a comment beside the relevant piece of code, not hidden away in git commit messages.


No need to apologize for different opinion.

IMHO comments aren't there for newcomers. You are a newcomer for a month or 2. You are a developer for years most often. Besides, how is

    //Steve in accounting asked me to put this in
    foobifyTheBaz(bazPtr, foobificationParams);
more helpful for newcomers than just

    foobifyTheBaz(bazPtr, foobificationParams);
Comments answering "why" are very important when bugfixing/changing stuff. You want to know if sth was intentional or just accidental, and if it was part of the change that you have to override, or some independent change, so you know which behaviour should stay, and which should change. You should always look at the git blame of the region you change anyway before making a change (it often turns out your change was there before and commit message has the reason why you shouldn't change it back).

And you don't look for git commits. You enable git blame annotations in your IDE and hover over the relevant lines to immediately see who changed that particular line, when, and why. That's why commit messages are as important as good naming IMHO.

By the way, when you have code like:

    //Steve in accounting asked me to put this in
    foobifyTheBaz(bazPtr, foobificationParams);
    barifyTheBaz(bazPtr, barificationParams);
    if (mu(bazPtr)) {
        rebazifyTheBaz(bazPtr);
    }
How do you know which lines the comment refers to, and whether you should delete it or not, when Tom asked you to change the barifyTheBaz invocation? That's the main advantage of commit messages over comments - they are always fresh and encode the exact lines they refer to.


So what you actually write is

    // We foobify the baz in order to ensure that both
    // sides of the transaction have their grinks froddled.
    // This was a request from Steve in Accounting;
    // see issue #4125 for more details.
so that (1) if you'd otherwise be wondering "wait, what they hell are they doing that for?", you get an answer (and, importantly, some inkling of what would need to have changed for removing the code to be a good idea), (2) there's an indication of where you can find more details (hopefully including the original request from Steve in Accounting), and (3) if the code around here changes, you can still tell that the relevant bit is the foobification of the baz.

I strongly approve of putting relevant info in your VCS commits too, of course. But, e.g., if someone changes the indentation around there then your git blame will show only the most recent change, which isn't the one you want, whereas the comment will hopefully survive intact.


Changed formatting isn't a problem IMHO. You can ask git to skip whitespace changes, and you can (and should) enforce consistent formatting anyway to make history cleaner. And even if for some reason you don't want to do either - you can just click "blame previous revision" if you do encounter a "changed formatting" revision.

I encountered it a few times and it was never a big problem.

The problem with comments in the code is - they have to be maintained "by hand", and they are often separated from the context after a few independent changes.

When you change a function called inside the foobify function because of another change request, you will most probably forget to check all the calling places all the way up the call stack and fix the comments referring to the foobify function. And then comments may start to lie.

I encountered lying comments a few times and it usually is a big problem. I started to ignore comments when debugging, and I'm not the only programmer that I know that does that.

I do think there is a place for comments in the code, for example for documentation of some API, but IMHO commit messages are the perfect place for explaining reasons of particular change, and I prefer not to repeat that.

http://cdn.meme.am/instances2/500x/4310914.jpg


> That's the main advantage of commit messages over comments - they are always fresh and encode the exact lines they refer to.

Unless they're something like "fixed indentation".


In which case it's one click away. And anyway you should enable checkstyle and autoformat on saving files, and you can use "ignore whitespace changes" in git view for legacy code.


1) Vector3 computeNewtonianGravity(float massA, Vector3 positionA, float massB, Vector3 positionB);

Which way does the returned force vector point? Towards mass A, or towards mass B?

2) Vector3 UnitVectorWithDirection(Vector3 originalVector);

What does this function do when the magnitude (length) of originalVector is zero?

3) float ArcTan(float x);

What is the allowable range of values for x?

...


While I don't agree with the general sentiment of vinceguidry (e.g. I think comments answering "why?" are very important), for your specific counterpoints I do tend to try to solve them in code if possible:

1) Vector3 calcNewtonianGravityToB(float massA, Vector3 positionA, float massB, Vector3 positionB);

2) Vector3 unitVectorWithDirection(Vector3 originalVector_mayBeZero); vs. Vector3 unitVectorWithDirection(Vector3 originalVector_throwsIfZero); (if in a supporting language, throws declaration would also clarify).

3) float ArcTan(float x_throwsIfZero); or float ArcTan(float x_returnsNaNIfNot0to1), etc.

This assumes:

(a) You're not working in a language or environment that supports range constraints in the first place, because if you are then that's ideal.

(b) You're working on personal code or a small team. If you're writing code for public consumption you have no choice but to add detailed API documentation if you want to be successful, even if it's not DRY.


The sibling comment about renaming is good - particularly with regard to the first function. However I think the other two are better suited to documentation rather than trying to make the function signature explain itself. Sooner or later you'll just remember that it takes a float, not the variable name.

Certainly calling things like "x_returnsNaNIfNot0to1" works, but I find it a bit ugly and it gets complicated if you have multiple or more complex constraints.

This is where languages with docstrings are nice. In Python all I would do is add a """ comment describing the inputs and the return values.

You then have a) a comment describing the code; and b) documentation that's standard, so people can pull it up with pydoc or ? in IPython. When I'm working in Jupyter, I often hit shift-tab to check what a function is expecting.
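For instance, a minimal sketch (a made-up function, reusing the unit-vector example from upthread; the docstring convention is the real point):

    import math

    def unit_vector(v):
        """Return v scaled to Euclidean length 1.

        Args:
            v: a non-empty sequence of floats with nonzero magnitude.

        Returns:
            A list of floats with norm 1, pointing the same way as v.

        Raises:
            ValueError: if v is the zero vector (no direction).
        """
        norm = math.sqrt(sum(x * x for x in v))
        if norm == 0:
            raise ValueError("zero vector has no direction")
        return [x / norm for x in v]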


Was just looking at the Oculus SDK math library, and they have some really neat ideas...

For one, handling rotation sign (is positive clockwise or counter-clockwise?), left-handed vs. right-handed coordinates, etc., through C++ template parameters.


I prefer to put that kind of stuff in the javadocs/xmldocs, but I mostly work in C#/Java, with IDEs that have tooling baked in that make generating that kind of documentation A.) really easy to generate and B.) very useful in auto-complete lists, mouse-over popups, etc.

Ideally, you'd have some tests as well that would test those kind of corner-cases and illustrate failure cases and expected outputs.


When you are using a particular method that is required by the context of your work, rather than code you have control over. Perhaps the method name in the framework you are using is non-obvious, or there is a bug in the way it works.

Sometimes comments are just to save future you some time or help other developers find context.

Suffice it to say, any project of significant complexity will probably require comments at some point. That complexity can come from the code, the task, the stakeholders or the dependencies.

Unless you are happy with a method name like getSimpleProductInstanceThroughCoreFactoryBecause ModuleRewriteWillBreakCompatibilityWithSecurity PatchSUPEE5523 ()

In that case I guess you're right.

Edited to stop the method name stretching HN layout.


Break the long method up into multiple methods, each implementing a different fix. Bundle these methods up into a gateway class.


That introduces unnecessary complexity in the code for something that could just be explained in a comment.

There were no fixes performed in the method; the method just gets a product instance from a factory. The comment gives the reasons why it was using a core factory directly instead of using the automatic object resolution mechanisms.

The real method name was getProductInstanceFromCoreFactory. Still long, still clear as to what it is doing. But making the context clear would be more code than it is worth.

Avoiding comments by writing clearer code isn't a bad habit, but my point is that comments are very useful for providing context when expressing that context in code would be error-prone or cumbersome.


A good marker for when comments are going to be useful is when something stumps you, and then you fix it and/or figure it out. Chances are, when you read the code again it will stump you again - or on re-reading you might not even realize there was an issue. So I comment the change. It's usually something simple like, "This cast avoids a special case in method X", or along those lines. The comment is the "why" whereas the code is the "what".


If I run into that situation, I refactor the code so that I can better understand it the next time. To not do so is to waste all that time you spent understanding it.

> It's usually something simple like, "This cast avoids a special case in method X", or along those lines.

If I had an issue like that, I'd fix method X to be more accepting of unclean inputs.


The problem is that method X is a remote invocation into another system that has different change-control procedures, a different ticketing system, and a different release schedule.


I'd use a gateway class and intention-revealing method names, even if all that method was doing was casting a value. I'd call it ".edge_case_fix" (but described better than "edge case"), and do the same for every weirdness in the external system that requires workarounds.
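Roughly this shape (a toy Python sketch; all names here are invented):

    class RemoteSystemGateway:
        """One home for every workaround the remote system needs."""

        def __init__(self, client):
            self._client = client

        def fetch_reading(self, sensor_id):
            raw = self._client.get_reading(sensor_id)
            return self._cast_to_float_to_avoid_string_compare(raw)

        def _cast_to_float_to_avoid_string_compare(self, raw):
            # The remote API returns numbers as strings; comparing them
            # as strings hits a special case in its query method.
            return float(raw)

Every weird workaround gets its own small, well-named method, so the "why" lives in the name instead of being scattered through call sites.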


I find one place where comments are absolutely critical is in external facing APIs. Summary of the purpose of the API, purpose of each method, valid arguments and possible return values. Think how often you need to read the comments for whatever libraries and APIs you use in your work.


I deal with two types of APIs. Well-documented and otherwise. For well-documented APIs, I can simply re-read the documentation to figure out the other side of my code.

For the rest of it, I try to write as cleanly as possible, and as robustly as possible the first time, so I'm not in the dark the next time I have to look at it. If I have to take time out to re-understand how it works, I'll just take the time and not feel guilty about it. The company wanted me to use that dreck, I'm not going to feel bad about having to take more time with it.

I don't think comments would help much in the second situation given decent engineering the first time around. If it's my code, I'll usually recall the idiocy I had to work around when running down whatever issue had me looking at it again.


Comments are incredibly useful for that bit of code that you spent hours trying to make it work and you couldn't figure out a way to name things well or to improve it. That's where a comment is priceless.


The downside of course is that comments aren't compiled/run - so they often become out of date. "Often" might actually be an understatement.

I can't tell you how many times I've been reading a comment that runs quite contrary to what the code really does. You then end up reading the code to reason about it anyway, paying two taxes. After too many of those experiences, you just end up going straight to the code as the source of ground-truth.

This all depends on the readability of the code / and how soon these comments get out of sync. Maybe my experience is atypical, but I doubt it.


Names for classes, methods, parameters, and variables all suffer from this same problem. I think the solution to align code and comments is code reviews. Not writing comments also works but fails to solve related issues.


Where I work, I've always required that code that's submitted for review has a descriptive commit message. That is, the commit should explain what the problem was and how it was fixed, and why it was done that way.

This way, I can run git blame (or the equivalent VCS command) on the file and get a reasonable set of comments that usually match up with the code.


I'm sure a lot of people are hoping that one day we can generate code just by writing the comments and leaving the machine to do the rest.

I certainly am.


More structured alternatives to comments make that kind of thing much more possible, e.g. https://hackage.haskell.org/package/djinn


Maybe it's because I write in Ruby, and so never have to do any tricky optimizing, but if solving a particular problem starts to run over a half hour, I look to re-architect the project, either by reaching for a gem or telling my boss that X is too hard and we should do Y instead. My patience for going down rabbit holes has mostly gone over the last year.

Also, again perhaps because I use Ruby, it never takes me more than a few seconds to name something. If it does, then it's a code smell and I go looking for the missing class.


I don't see how Ruby saves you from optimising (to my limited knowledge, it's not exactly a fast language), but I agree with you that naming stuff sensibly and extracting functions (which you can then name sensibly) is most important for maintainability and can make "in-code" comments unnecessary in many cases. However I strive to always document what a function does if it's not obvious -- though I'd call that "documentation" and not "comment".

Example for obvious: Int add(Int x, Int y) in a typed language. Example for not obvious: add(x, y) in an untyped (or "dynamically typed") language (Does it auto-coerce? How? Can it add complex numbers? In my particular representation? ...).

Someone mentioned that sometimes comments are useful e.g. to document a quirk/bug in library function you call, and I have some examples of that in my own code. But most often you should be able to rectify that by wrapping said function in your own one that omits the problem.

If it seems impossible to give a function a sensible name that isn't ridiculously long, split it up. The clearer code of the individual parts, and how they are combined, should give a hint of what is actually computed. Maybe it's a new concept in your business logic, in which case providing a clear and exact definition makes sense anyway (put it into the appropriate place in your documentation). This whole procedure can take a significant amount of time, but it will be worth it in maintenance!


> I don't see how Ruby saves you from optimising (to my limited knowledge, it's not exactly a fast language)

That's precisely why you don't optimize. If you find you need fast code, you use a different language. Ruby is the language you use when maintainability and extendability take priority over speed. It's excellent for web development, where any speed improvements you make will ultimately be dwarfed by network latency.

> Example for obvious: Int add(Int x, Int y) in a typed language. Example for not obvious: add(x, y) in an untyped (or "dynamically typed") language (Does it auto-coerce? How? Can it add complex numbers? In my particular representation? ...).

That's a good observation, and it's made me think about how I deal with this in Ruby. First, generally you can tell by looking at a method's code what it's expecting you to pass to it.

Second, you don't generally pass around complex types to library functions; you use basic Ruby value objects like strings, symbols, hashes, and arrays. A gem will often define its own classes whose objects you might pass around (money, phone numbers); these will often be the primary focus of the gem, and how to use these objects will be written right there in the documentation. These classes will typically have "parse" functions that will take random input and turn it into a more useful object.

In Ruby, you generally only pass around complex objects in your own code, using JSON or some other format to interact with external systems.


> That's precisely why you don't optimize. If you find you need fast code, you use a different language.

Ah, now I get you :)

> First, generally you can tell by looking at a method's code what it's expecting you to pass to it.

Here we might differ: I would always prefer to state clearly in the function doc what types of parameter values are allowed. For example, even if you have a really simple little wrapper in JavaScript:

  function log(x) {console.log(x);} 
Without documentation, you have to know what console.log can do for different types. So I'd definitely prefer this:

  /** 
   * Logs x to console.
   * @param x a value of a primitive type (other types are not guaranteed to be logged in a readable manner). 
   */
  function log(x) {console.log(x);}
> A gem will often define its own classes (money, phone numbers) whose instances you might pass around; these are usually the primary focus of the gem, and how to use them is written right there in the documentation.

Yes exactly, it will be documented as any public API should be. I have no problems using opaque types. But a function call(x) which expects x to be some object representation and not any old string (for which the library has constructors, e.g. PhoneNumber(string)) should surely document this, no?
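Something like this JSDoc sketch is what I have in mind (PhoneNumber here is a hypothetical stand-in for such a library class):

  // Hypothetical opaque type, standing in for a library's class.
  function PhoneNumber(digits) { this.digits = digits; }
  PhoneNumber.parse = function (s) {
    return new PhoneNumber(String(s).replace(/\D/g, ""));
  };

  /**
   * Dials the given number.
   * @param number a PhoneNumber (construct one via PhoneNumber.parse(string));
   *               plain strings are deliberately not accepted.
   */
  function call(number) {
    if (!(number instanceof PhoneNumber)) {
      throw new TypeError("call() expects a PhoneNumber");
    }
    // ... actual dialing elided ...
  }

The doc line costs seconds to write and saves every caller a trip into the library's source.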


In my book it goes: 1. types, 2. tests, 3. method/variable names, 4. comments. If you can't make the code clear any other way, then add a comment, but it should be a last resort.


I find that sometimes I am diving into a codebase and wish I had a higher-level map of where things are.


/* Yes, this is wrong. The output goes in the db and needs to match /path/to/code/in/other/system.ext */


Kind of old code syndrome. I would say it is more like switching from Java to Ruby on Rails full time, then looking at your old Java code: the language is now rusty to you, and so are your memories of the class libraries and frameworks you used. Even with beautiful code it will be a struggle.


Completely agree.

"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?" -Brian W. Kernighan

I read a book by Brian Kernighan a few years back, "The Elements of Programming Style", and for me it holds so much good advice. The book may not be as relevant in the days of Ruby, C99 and the like, but the advice is valuable nevertheless.

I wonder if reading the literature (the countless books written by early adopters and renowned engineers) would mould programmers to do things differently. For instance, reading "JavaScript: The Good Parts" completely changed the perspective with which I look at the language. It made me fall in love with it.


I've just started playing around with JavaScript and I'm definitely gonna check this book out now, thanks.


Not just code.

When you design anything - a rocket engine, an immunoassay product, whatever - it should be part of the work to document the assumptions, calculations, and rationale for the design to a degree that somebody skilled in the art could follow your work when you're gone.


That doesn't help. Arthur has written a very nice hundred-page introduction to the trace formula. You can read that, and it won't tell you why you should care about the trace formula. But if you know Langlands-Tunnell and how it was proven, or how Jacquet-Langlands was proven, you already know to care about the trace formula, because you know that it will prove similar results.

The abstraction is necessary: you cannot do algebraic geometry with varieties alone. But while the core of the subject is a multi-thousand-page jewel of abstraction, Vakil is a perfectly readable, example-filled introduction to algebraic geometry, covering most of the contents of the glory that is EGA.

Only the rare lemma is intrinsically interesting. Most often it's the theorems that are worth knowing, as steps on the path to understanding something much bigger. Those lemmas that get used again and again are named, often descriptively.


The one thing that has always bugged me about lots (most?) of open source software is the almost total lack of comments. I consider that bad programming. I have projects dating back 15 years, in both FPGAs and software, that I've had to go back into to maintain or borrow from. Comments, for me, have always been part and parcel of writing code.

I sort of have a conversation with myself as I document code, telling myself why something needs to be done and, if necessary, how. As a result, anyone can go into my code for any project at any time and find their way around. In fact, they don't even have to be domain experts to understand domain-specific code, because I often take the time to document such things as if I were just learning about them (up to a point).


I recently picked up an Angular-like JavaScript framework I built from scratch, after about 9 months away (I'm working on it now, actually), so I know exactly what you're talking about.

Over the years, I've stopped and restarted about 3 pretty major personal projects, and through that I've progressively gotten better at writing and documenting code in a way that makes it easy to pick back up after a long break.


I've brought up this issue in mathematical circles before and have encountered incredible resistance. The culture is very deeply entrenched, and the current stakeholders are not going to allow it to be changed.

Just to briefly establish that I have some qualifications here: while I do not have a PhD and am not, nor have I ever been, a professional mathematician, I did have a lot of success in mathematics as an undergraduate, publishing research in the intersection of algebraic topology, differential geometry, and analysis with a professor at an Ivy League university.

I was myself a practitioner of the school of It Is Clear That..., until I realized that my mentor for the above research, a famed professor and former Putnam Fellow who even has inequalities named after himself, didn't follow some of the stuff I'd written in some notes I showed him. And if he didn't follow it, then I knew that meant I probably didn't understand it as well as I thought I did. It wasn't a big surprise to me then that, after a few weeks of effort, I was completely unable to take my approach further. His approach, of course, worked.

He is one of the few mathematicians I've personally known who emphasized clarity just as much as correctness and technical depth, and he had no desire to appear smart. But by that point his career was mostly behind him. There was nobody left for him to impress.

Part of the problem, ultimately, is that phrases like "it is trivial that..." do serve a purpose when used appropriately. As a result, it's hard to argue that they should be categorically excluded from mathematical writing. But that in turn opens them up to abuse, and in a culture so obsessed with appearing smart, the abuse can be quite extreme. Because instances of abuse are usually motivated by the desire to appear smart rather than by legitimate economy of space, bringing the issue up with the author amounts to a moral accusation, which makes their defense of the phrases even more impassioned. They're not going to admit that they're just trying to puff up their ego.

The worst part is that, in some circles, if you criticize this sort of writing you'll just be greeted by a bunch of people who are proud to exclaim that they, of course, are smart enough to fill in the blanks, and that if only you were too, you surely couldn't still have any objections to it.

For me, transitioning from mathematics to software development was a huge breath of fresh air, as the culture is (generally) the exact opposite. Nearly every good programmer I've met has valued clarity and maintainability equally alongside correctness.


I am curious, how would you assess the readability of my code:

http://github.com/EGreg/Platform

I kind of developed that style organically as I went along, in both PHP and JS.

I tried to make the output readable as well. I've seen many frameworks output a bunch of gibberish, but with today's bandwidth it's not such a big deal to add a little bit of whitespace and make everything readable. Here's a fairly complex application, you can view the source of any page:

http://qbixstaging.com/Groups/


I can give you some feedback since I tend to write a lot of PHP.

Mostly, code style is about consistency. You can have infinite arguments about whether opening braces should go on the same line as the operator/function, but at the end of the day as long as you stay consistent it doesn't really matter.

Here are some consistency issues I found. Your function declarations put the opening brace on the following line, but flow-control statements open on the same line. Ex: https://github.com/EGreg/Platform/blob/master/platform/plugi...

In the same vein, commenting seems inconsistent - this file's docblock opens with /* */ style comments, but then you switch to // style. Ex: https://github.com/EGreg/Platform/blob/master/platform/Q.php...

Further down, you use two styles of if(), one with braces and one without, right next to each other: https://github.com/EGreg/Platform/blob/master/platform/Q.php...
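A generic illustration (JavaScript here, but the point is language-agnostic; the identifiers are placeholders):

  var ready = true;
  function start() {}
  function stop() {}

  // Mixed styles: every if () is a small surprise for the reader.
  if (ready) {
    start();
  }
  if (!ready) stop();

  // Consistent: pick one form and use it everywhere.
  if (ready) {
    start();
  }
  if (!ready) {
    stop();
  }

Neither form is wrong; it's the mix that costs readers.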

I much prefer 2-space indents to tabs; to me, it's much more readable. I noticed your HTML output is also tabbed, which (arguably) needlessly increases the horizontal width you need to view a page of markup, especially with modern deeply-nested structures.

Overall the project looks well thought out, and you may have valid reasons for any/all of the above. Just throwing in my initial reactions as a project outsider.

Also, dunno if you know about PSR, but it's a good starting point that many people (potential contributors) are familiar with: http://www.php-fig.org/psr/psr-2/


Thanks for the reference! PSR-2 looks useful.

I guess I wanted to know more feedback about the design and architecture of the system, the names for things etc.


There are a bunch of places where you keep the PHP and JS files in the same directory, like this one: https://github.com/EGreg/Platform/tree/master/platform/plugi...

Any special rationale behind that approach?


>> of a fairly new developer who's just been asked to do non-trivial update of his own code for the first time

"Fairly new" has nothing to do with it. I've been struggling to read my own code since 1998, and I'm constantly in battle with myself to strategically place comments in the right places. I doubt that I'll ever correctly estimate my own ability to comprehend my sh1t, no matter how many years of experience I have. In fact it seems to have gotten worse with experience: the more I know, the less I can trust myself. Not sure if I'm alone in that.


I've been writing code professionally for 25 years. You're not alone. It does get worse. Junior programmers just don't get it and jump into management before they have a chance to.


Oh that's good to hear. I was getting a distinct "Flowers For Algernon" vibe from the title :)


And a lot of it comes from trying to sound smart instead of readable.

Sorry, but if I were this guy's advisor, I'd keep his ass in school until he dropped his pretentious "it is obvious from ___ that"'s.

Ph.D. education just isn't what it used to be.


Mathematics papers (dissertations included) are meant for active researchers. Perhaps that implication really is obvious to an active researcher in the field. The fact that it is not obvious to someone outside of its intended audience is not a mark against the paper. If I said to someone in machine learning that by assuming zero-mean Gaussian priors for the coefficients, we are encouraging small values of the coefficients, that would be really really obvious to someone in ML/stats/data science (I've restated a really basic statistical learning concept), but not to a random educated person.
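For the curious, that statement is the standard MAP-estimation argument; a sketch (in LaTeX notation, not tied to any particular paper): with likelihood y ~ N(Xw, \sigma_n^2 I) and prior w ~ N(0, \sigma_w^2 I),

  \arg\max_w \left[ \log p(y \mid X, w) + \log p(w) \right]
    = \arg\min_w \frac{1}{2\sigma_n^2} \lVert y - Xw \rVert^2
                + \frac{1}{2\sigma_w^2} \lVert w \rVert^2

which is ridge regression with penalty weight \sigma_n^2 / \sigma_w^2: the zero-mean Gaussian prior is literally a quadratic penalty on large coefficients.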


> never mind five years!

Some bits of the DMD compiler back end go back 32, 33 years!


Ironically, my professor advised me during my thesis not to add all of my source code to the appendix. Yay, let's make things not reproducible :D



