By the way, write your plus sign (+) and lower-case letter Tee (t) so that they don't look identical!
From my experience, I'm one of the very few people who write manually in a serifed font; in particular, I write 1, l, I, and | very distinctly and use a slashed zero so as to distinguish it from the letter O. I wish more people would do this since it is still sometimes necessary to communicate on paper, and trying to interpret handwritten notes with ambiguous writing can be quite annoying.
> Unquestioning faith in calculators.
The same can be said for computers too - except the problem is often worse as computers are far more complex to the point that almost no one understands them fully, and we are encouraged to accept this.
> I write 1, l, I, and | very distinctly and use a slashed zero so as to distinguish it from the letter O.
My handwriting is really similar. I use the cursive "l" in equations. I also use Mathematica's symbol for the imaginary unit [1], since teachers often forget about this and start using "i" for indexing sums too; it's quite common for both to appear in the same exponent.
I studied undergraduate mathematics, and I think it's a field which requires the student to spend as much of their own time as needed to understand a concept. Perhaps this could be said of all fields, but there is just limited time in a class to absorb, e.g., Euler's Formula, because there are several concepts in play, and if the student doesn't have a firm grasp or recollection of one of them, then they get lost at one step on the chalkboard, and the rest of the derivation is useless to them. This happened to me many times, and I'd just have to make a note to go figure something out, and the rest of the class was basically useless to me. Once it became more acceptable to have a laptop in class, I could just look it up right then and not have to wait till later, but I still just tuned out the professor.
> Perhaps this could be said of all fields, but there is just limited time in a class to absorb
Which brings us to the question: what can you expect to actually learn in a class anyway?
I'd say that for any non-trivial curriculum (which ought to include most of a university...) you need to read about the subject yourself and do your own learning. And once you do that, why do you need the classroom? For asking questions, maybe? But a classroom probably isn't the most effective way to ask questions.
When I teach calculus I give an hour-long-talk version of this document. I particularly emphasize that infinity is not a (real) number, so any arithmetic I see them doing with infinity will be automatically wrong; and that "equals" will be overloaded, and that most professors/TAs/tutors will not point out that there are different kinds of equals signs.
The stream-of-consciousness notation section also rang true. It's a huge frustration and time waster for everyone involved.
> I particularly emphasize that infinity is not a (real) number, so any arithmetic I see them doing with infinity will be automatically wrong
I agree with you for the purposes of teaching undergraduate beginning-level courses that involve mostly the real number system (and only incidentally the complex number system). That said, our learned fellow participant impendia here on HN, a professor of mathematics, has strenuously disagreed with me by pointing out the specialized number systems that do treat infinity somewhat like a number. On my part, for the students I encounter, I stick to discussions like "All about Infinity"[1] (formerly titled "Infinity Is Not a Number - It's a Free Man") by Katherine Körner, another astute mathematician. I have frequently seen discussions of infinity as a (real) number here on HN that essentially boil down to the error of treating the quotient upon dividing by zero as a real number.
I usually level with my students about this, but with the caveat that any serious use of infinity involves a precise and technical definition, in particular one that is not guessed on the fly as students tend to do. To illustrate, I then talk about how in elementary calculus infinity is a precise and technical shorthand for the idea of unbounded growth.
[Edit:] I also recently had the wonderful opportunity to talk to really advanced high school students about projective geometry, and I contrasted the difference between the way people talk about infinity (the point on the horizon! I'm just making it up!) versus the precise and technical definitions (a specific point in a quotient of a vector space).
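For what it's worth, the "precise and technical shorthand for unbounded growth" reading can be unpacked in a few lines of code. This is only an illustrative sketch (the function name and the sample bounds are invented): saying 1/x → ∞ as x → 0+ just means that for any bound M you pick, 1/x eventually exceeds it and stays above it.

```python
# Illustrative sketch: "1/x -> infinity as x -> 0+" unpacked as
# "for any bound M, 1/x eventually exceeds M". Names are made up.
def witness_exceeding(M):
    x = 1.0
    while 1 / x <= M:
        x /= 2
    return x  # for all 0 < t <= x, 1/t stays above M

for M in (10, 1_000, 1_000_000):
    assert 1 / witness_exceeding(M) > M
```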
Shouldn't be too difficult. I haven't done much teaching and could write a short list. Unfortunately, I don't think it would be very surprising to most people (just as this list probably isn't surprising to most math majors, since we've all made at least one of these "stupid mistakes" before, even when we knew better, especially in the early years. The difference -- and this is reiterated several times -- is that good/smart students notice their error and debug their math/program, instead of just going with it or fudging their result).
Here are a few:
* Not actually thinking of a solution to the problem before starting to write a program; not soliciting requirements.
* ALL of the mistakes covered by this article. Struggling CS majors are often (not always) really, really horrible at math, and this -- more than anything else -- really holds them back from writing correct programs.
* Off-by-one and the functional equivalents
* Infinite loops in exception handling
* not enough input validation; too much input validation
* Reinventing bad versions of existing algorithms (Dijkstra's algorithm is a good example) and in general not enough research before implementation.
* The other side of that coin is taking stack overflow upvoted answers as gospel (basically our equivalent of trusting the calculator)
* Fundamental incomprehension of boolean algebra, which gives rise to all sorts of errors:
incorrect paren placement
Obscenely complicated and/or absurd if conditions because they don't understand boolean algebra (e.g. I've seen conditions that eventually simplify to a || !a)
Complicated programs and grandiose bug-hunting because they couldn't figure out how a simple boolean expression worked (there was a post on HN a while back about Javascript == vs === where the developer basically wasted a day going down a rabbit hole he attributed to == vs ===, but which was actually completely avoidable if he had taken an undergraduate discrete math course that hammered home boolean algebra.)
* As a general rule, any program written by a student containing concurrency is always wrong, unless concurrency was explicitly taught (many schools just have a short unit in a course or two, instead of integrating the topic throughout the curriculum).
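To make the "simplifies to a || !a" point concrete, here's a hypothetical sketch (the function and its arguments are invented, not taken from any real student code): a sprawling condition that a truth table reveals to be a tautology.

```python
# Invented example of a condition that simplifies to a || !a.
def should_retry(is_timeout, attempts):
    # A beginner might write this sprawling check...
    if (is_timeout and attempts < 3) or (not is_timeout) or (is_timeout and attempts >= 3):
        return True
    return False

# Truth-table check: the condition reduces to
# (is_timeout) or (not is_timeout), which is always True,
# so the branching is pure noise.
for t in (True, False):
    for n in (0, 5):
        assert should_retry(t, n) is True
```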
- Missing the base case (or not realizing there are multiple)
- Not understanding/using variable scoping
- Insufficiently-exhaustive case checking (especially if there are multiple variables)
- Using built-in data-types that do not have the functionality or invariants that you need instead of making custom types
--- (subcase) Using the wrong number type: Using floats for any non-integer, when sometimes you want Decimal or need to hand-roll a fixed-point number; Not thinking about the size of your integer type.
--- (subcase) Encoding data as strings, parsing values out and concatenating them back in as needed. For some reason, this is surprisingly common among 101-level beginners.
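As a concrete instance of the "wrong number type" subcase above, a minimal Python sketch of why "floats for any non-integer" bites:

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 exactly, so the "obvious"
# equality fails:
assert 0.1 + 0.2 != 0.3
# Decimal does base-10 arithmetic, so money-style sums behave:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```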
Folding with the wrong initial value. Recursive functions where the base case is improperly defined. Stuff like that I consider 'functional off-by-one' because the student understands the general shape of the correct solution but gets a detail about iteration/recursion wrong at the boundary.
edit: oh, also, I guess literally off-by-one errors are also possible in any language that allows any sort of side effect, but that's kind of a stupid degenerate case :-) (edit: possible, definitely not popular... wrong word there, sorry).
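The fold-with-the-wrong-initial-value mistake is easy to demonstrate; this is a toy sketch, not taken from any actual assignment:

```python
from functools import reduce

nums = [2, 3, 4]

# Folding a product with the multiplicative identity, 1:
assert reduce(lambda acc, x: acc * x, nums, 1) == 24
# The "functional off-by-one": starting the fold from 0
# silently zeroes out the whole product.
assert reduce(lambda acc, x: acc * x, nums, 0) == 0
```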
One realises how confusing and complicated the standard notations are when one implements a programming language. You would prefer Lisp/Scheme or RPN notations, but they both scare away a lot of people, who prefer "natural" notations. People don't realise that "natural" notations probably hurt them as much as they help them.
It seems tempting to have a single unambiguous notation for mathematics. But
In constructing such a language, one will quickly realize that doing mathematics becomes an intensely arduous task.
This is not unlike recent discussions about the conlang Ithkuil on HN.
For better or for worse, math notation has for the most part been optimized for writing on paper or a blackboard, using context to eliminate "inessential details". Local edits are generally easy, and pieces of notation are not always so tightly coupled. (For example, we can put a "for all x" at a distance from some equation.)
Have you ever attempted to write Lisp on a blackboard, then needed to insert an expression in the middle? It's extremely difficult because you have to perform a quadratic amount of work (erasing everything after and re-indenting) to keep the Lisp readable. At least this is what I've noticed in interviews. You can sometimes patch your Lisp forms with lines and arrows, but that's the quickest way to get spaghetti on your blackboard.
RPN is similarly useless for the task. RPN is exceedingly easy to write, but not so easy to read. Also, higher level mathematics becomes unwieldy in RPN.
Sussman and Wisdom in their book "Structure and Interpretation of Classical Mechanics" took a different approach to math notation in order to make it unambiguous for that relatively small subfield. It looks like regular math notation but is written in such a way as to make things more easily computable. I'd say they were successful, though their work would be difficult to generalize.
> Have you ever attempted to write Lisp on a blackboard
What's a blackboard?
Seriously, blackboards are pretty obsolete technology. Designing a notation to optimize for blackboards in this day and age is kind of like designing roads to accommodate horses.
There are two situations which black/white boards are awesome for.
- Giving lectures in subjects which desperately need drawing and writing. Look at what happens at 1:16:27 here:
https://www.youtube.com/watch?v=BPSEpDq6QYc
The speaker can just go draw a picture, in response to a question. I know of no alternative which can do something like that nearly as well.
- Collaborating in subjects which need drawing and writing. I can stand next to someone and talk over a problem. They write some formula on the board. I insert some additional bits and pieces that they missed. We draw pictures.
I'd like to hear what you think replaces black/white boards.
The black vs. whiteboard thing is a whole different issue :)
Look, I have nothing against blackboards, just as I have nothing against horses. Horses are really handy in some situations. If you're in the wilderness and you need to cross a stream, a horse can be just the thing. There's no technology that can compete with a horse in that case.
But to constrain your infrastructure (notation in the case of mathematics, roads in the case of horses) according to the needs of a blackboard or a horse is, IMHO, a serious mistake in this day and age. If you design your roads for cars instead of horses you get tremendous productivity boosts, even as you lose the ability to deal with some edge cases.
Notice that to find an example of the real utility of a blackboard you had to bypass >95% of the lecture and go to the very end. Imagine how much better things would be if the rest of the lecture had been presented as source code that a student could analyze and manipulate and error-check using some automated tool.
The great thing is we're not constraining our notation. As reikonomusha said, the standard notation is easier to read. Other notation is better for programming or certain things, and that's what we use there.
Re: your last paragraph. I only went to the end because I knew that there must have been a good example in the questions. If you want I can give you examples from the middle of a talk.
Only because you're used to it. In fact, standard notation is much harder to read because it's ambiguous, often to the point of actively introducing errors. See:
No, I don't dispute that blackboards are useful. What I dispute is that their utility is so high that we ought to design mathematical notation around their limitations.
Ok, I can agree that we shouldn't design notation around their limitations. And I do like what they do in SICM. But, I'm just trying to root for the point up-thread:
>It seems tempting to have a single unambiguous notation for mathematics. But In constructing such a language, one will quickly realize that doing mathematics becomes an intensely arduous task.
When talking about math, our notation doesn't have to be precise, and that's ok.
> No, I don't dispute that blackboards are useful.
Just obsolete :P
To constrain your infrastructure according to the needs of a computer is also silly. Imagine if, in order to hum a tune, one needed to write sheet music using a programming language. Or if every spoken conversation were halted the instant a word is used incorrectly.
Mathematics (and a lecture on mathematics) is closer in nature to a conversation than a road.
Those are stupidly irrelevant examples. Nobody's suggesting that you would need to have a Coq parser between your keyboard and your display. You can type an incomplete or invalid expression just as easily as you can write one, but only with a computer can you have your statements automatically and reliably checked and errors flagged in realtime.
Essentially, that is the argument. If the point is to allow students to manipulate the lecture as data, then it must be error-free. That is literally putting a parser between the lecturer and the students.
And your condescending comment does not address my point, which is that a lecture would not benefit from (and is actively harmed by) the "features" being suggested. Real time error flagging would be extremely distracting, and writing mathematics as source code would be tediously slow and again distract from the point of understanding the mathematics.
Enabling multiple users to interact with the equations in realtime as they are being written is hardly the only possible benefit of using computers to communicate math, and even so it only requires that the equations be tokenized to be manipulable, not that the whole expression be completely error-free and the parser be running in an enforcing mode.
And your claim that writing mathematics as source code is too slow is very much lacking in proof. All we can say with confidence is that syntax like LaTeX markup on a standard keyboard layout is too inefficient for realtime use. This does not mean that realtime use is impossible if you allow for a more complicated IME and for a different final notation on-screen than the current standard math notation.
Computers can have pen input too for situations where that's more efficient, and for less expense than that lecture hall's complicated apparatus of multiple sliding blackboards to get around the finite drawing area limitation that computers don't have.
Blackboards don't generally crash, or have parse errors, or have encoding errors, or have usability problems. If you have chalk and a blackboard and have some semblance of an ability to write, you can use it to its fullest extent.
But they suffer from “memory exhaustion” rather easily, to the point where even several blackboards on a funky roller system might be insufficient for a one hour lecture developing a complicated proof.
And the garbage collection causes a serious interruption and sometimes misses things.
> have parse errors
Your lecturers obviously had much more legible handwriting than some of mine!
> have encoding errors
Well, an encoding where P, p, and ρ all occupy the same code point might be considered ill-advised, and one where m, n, r, u, v and w may variously appear distinct or not depending on the display device in use is downright mischievous.
> have usability problems
Does blocking half the lecture theatre’s view every time you write up a new formula count as a usability problem?
> But they suffer from “memory exhaustion” rather easily, to the point where even several blackboards on a funky roller system might be insufficient for a one hour lecture developing a complicated proof
This is more of a problem with people or time constraints, not the medium.
Computers nowadays don't generally crash either. And when you say "Blackboards don't have parse errors" what you mean is that blackboard never tell you when you've made a mistake (because they can't). That's not a feature. It's easy to program a computer not to tell you when it detects an error. But there's a reason this is not often done: detecting errors automatically is tremendously useful, especially when you're doing math.
Blackboards might be obsolete (especially given whiteboards), but free-form written math is not. You run into many of the same constraints regardless of whether you are writing on a blackboard/whiteboard/paper/tablet.
Different communication mediums are better / worse for different tasks. Lisp might be great for formally writing something down, where you want to have zero ambiguity. That's not the way human conversation works though, we are always eliding details based on the context in order to communicate quickly.
The blackboard is no different - it's not the perfect way to communicate ideas, but it lets us communicate ideas quickly.
The horse thing seems like a bad analogy. Whether you're using a blackboard or some fruity iPad or whatever it is, it still must go through your eyes before you read it. So you don't "optimize for blackboards", optimize for people is what you do.
I guess that depends on what you think math is for. If you think the purpose of math is merely to provide humans with intellectual stimulation then yes, it makes sense to optimize the notation for human consumption. But if you think that math is actually good for something besides being a distraction from existential despair, then rendering math for human consumption might not be the thing you want to optimize for. Instead you might want to use a notation that, while it can be rendered for human consumption, isn't optimized for that, but is instead optimized for, say, automated error detection, or automated compilation into some other form, like an executable program or a design for an FPGA.
The closest I can come to an analogy in the horse/car world is that in the horse world it makes sense to dispense water in troughs to make it easy for horses to drink. But despite the fact that water and gasoline are both liquids, it might not make sense to dispense gas in the same way you dispense water.
Going forward, the ideal notation would be suitable for both digital and analog formats while being concise but comprehensible.
It's hard to hit all of those points. Lisp falls short (neither concise nor suitable for analog) as does the existing notation (not as comprehensible and not suitable for digital formats).
Mathematicians still fight college building committees and IT people to get blackboards in newly-built classrooms. (Why IT people? Dust + classroom computer system/projector/etc.)
I have no hard data on this, but dry-erase markers seem to be used up more quickly than chalk.
Chalk is also less expensive. As one data point: 48 sticks of chalk for $4.30[1] versus 12 markers for $7.74[2]. I'm not sure about the bulk-price comparison, or if colleges can get dry-erase markers for cheap. Although, many of my professors carried their own supplies as markers left in rooms were usually stolen.
There could be a health comparison to be made between dry-erase marker fumes and chalk dust, but I know nothing about it off-hand.
Crayola chalk is horrid. I gladly pay a premium for well-made chalk (and I suspect most mathematicians prefer high quality chalk), which makes whiteboard markers significantly cheaper by comparison.
Some of them are legit: older professors often have better handwriting at the blackboard; chalk is sometimes better for certain drawings; left-hand smear; etc.
I suspect the biggest reason is that, in math circles, blackboards have a much larger cool/nostalgia factor.
Non-standard analysis (e.g. http://www.sjsu.edu/faculty/watkins/infincalc.htm -- another example of awesome web page design) does formalise the concepts of infinitesimals. It's quite neat to have a complete alternate formalism for calculus.
That's really cool. However, it's important to note that nonstandard analysis does not 'live' in the real numbers. It turns out the hyperreals aren't even a metric space!
I'm not intimately familiar with them, though, and have heard that they are a much more lucid abstraction for some things when compared to calculus on the reals.
I disagree that 0^0 is undefined. I would argue it should be 1, and that the function 0^x is simply not continuous at zero. The basic definition of exponentiation for integers is that n^m is a product of m instances of n. Because the multiplicative identity is 1, a product of zero numbers is always 1. Therefore 0^0=1.
Many things in mathematics are defined in a manner that is consistent and convenient. Defining 0! to be 1 is a similar case. In doing so nothing goes wrong, and the binomial theorem becomes simple and convenient to state. Without defining 0! as 1, it's a dreadful mish-mash of special cases.
Similarly with 0^0. Considering x^y where x and y are complex numbers, there is no consistent single value as x and y each approach 0. So in the reals and the complex numbers we leave 0^0 as undefined. However, in the case of natural numbers there is a case for declaring 0^0 to be 1. That's to make it convenient to talk about the set A^B as the collection of functions from B to A. When we do that we get |A^B| = |A|^|B|.
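The |A^B| = |A|^|B| convention can be checked by brute force; a small sketch (the helper name is invented):

```python
from itertools import product

def count_functions(domain, codomain):
    # A function domain -> codomain picks one codomain element per
    # domain element, i.e. a |domain|-length tuple over codomain.
    return sum(1 for _ in product(codomain, repeat=len(domain)))

assert count_functions([], []) == 1        # the empty function, so 0^0 = 1
assert count_functions([1, 2], "ab") == 4  # 2^2 = 4 functions
assert count_functions([1], []) == 0       # none into the empty set
```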
More of mathematics than you might expect is defined for convenience.
This argument assumes that the function f(x,y)=x^y is continuous near 0 and therefore lim (f(x)^g(x)) = (lim f(x))^(lim g(x)) at zero. But since it's not continuous, that formula does not necessarily hold.
It's less controversial when the terms are integers, but even there an argument can be made for indeterminacy. As the linked article says, "There is no one definition that always works well for 0^0"
I can use 0^0 = 1 to prove that 1 = 2, but I'm sure you can anticipate the argument's form.
> "There is no one definition that always works well for 0^0"
Yes there is. :-) 0^0 = 1.
Actually, the only case for claiming it to be an “indeterminate” comes from so-called “continuous exponents”. Which is, arguably, something that never occurs in reality — only in exam sheets by lazy calculus teachers.
Whenever you meet an algebraic equation with sum over 0 <= k <= n… and there's some m^n or m^k in it, it's always only true when 0^0 = 1. I don't claim I've seen them all but really, try to find a counterexample. What exactly was that argument for indeterminacy you were talking about?
Those equations come from reasoning about meaningful entities, not chimeras of “x^x”, or “x^y”, or worse. Again, try to find, say, a physics paper with x^y in it. I haven't read many physics papers, but I'm pretty sure you'll find precisely 0.
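One such family of identities is the binomial theorem; a quick check (the helper name is invented — note that Python's ** already takes 0**0 to be 1):

```python
from math import comb

# (x + 1)^n = sum_{k=0}^{n} C(n, k) * x^k; at x = 0 the k = 0
# term is C(n, 0) * 0**0, so the identity needs 0**0 == 1.
def binomial_sum(x, n):
    return sum(comb(n, k) * x ** k for k in range(n + 1))

for n in range(5):
    assert binomial_sum(0, n) == (0 + 1) ** n == 1
```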
As one mathematician I knew put it, “hard analysis is the primary source of all obscurantism in mathematics out there”. ;-)
From the bookmark given above, search for the phrase "That reminds me of a related question that seems to bother many students", followed by a brief and instructive exposition on the indeterminacy of 0^0.
The only concept a general expression of the form x^y could possibly represent is the space of mappings y -> x. There's one and only one such mapping when x and y are both the empty set. Arguing this to be wrong, as in “but it can't be 1 because there are no mappings to the empty set from non-empty sets!” ( = “0^x = 0”), is plain ridiculous. This definition simply does not fall apart; 0^0 is not a special case for it in any way.
Enumerating numbers one can represent with 0 digits put in a string of length 0 leads to the same conclusion. It's clear that there is one and only one string of length 0, unless you demand it to contain more than 0 digits, in which case there are none. [However, this combinatorial problem is not an independent one: it's equivalent to enumerating mappings from space of strings to space of chars.]
In other words, not only are there definitions that work well for all cases, including 0^0, there's actually only one such definition.
> I'm sorry but your links lead to the same continuity example that has nothing to do with real world (and real mathematics, as well).
I guess that would explain why it's located in a litany of common student errors compiled by math educators, as well as the other reference I provided.
But you know what? I'm not interested in posting to a thread that downvotes posts with a probability proportional to their accuracy and relevance.
The limit of x^x does equal one when approached from the right (i.e., when x is positive and decreasing to zero). From the left it also approaches one, though there the function is only defined over the complex numbers. Anyway, while the limit approaches one from both sides, the function cannot be evaluated at x=0 itself and so is undefined.
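The right-hand limit is easy to check numerically; a quick sketch:

```python
# x**x for x decreasing toward 0 from the right climbs toward 1,
# independently of whether one defines 0**0 itself.
values = [x ** x for x in (0.1, 0.01, 0.001, 1e-6)]
assert values == sorted(values)      # increasing over these sample points
assert abs(values[-1] - 1.0) < 1e-4  # 1e-6 ** 1e-6 is within 1e-4 of 1
```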
Just because the function has different limits at some point does not mean it is undefined at that point. Consider the signum function. It has limit -1 when approached from negative side, limit 1 when approached from positive side, but is actually defined as 0 at zero.
Mostly off-topic, but I found the Altavista linkback search link on the top of the page really... something. "Cute" would be the wrong word, but it comes closest to describing what I felt.
"As we've used all the letters of the Roman alphabet, and the Greek one too, I'm just going to write these letters really bold; take notes carefully, there are no distributed notes nor textbook for this freshman analysis course."
I had a calc prof who spent most of a class lecturing on "RTFT," where the last T stood for Text Book instead of the M for Manual, because he thought somebody had asked a stupid question.
To be fair, the question was kind of dumb. But the textbook was also worthless and created to generate revenue for the math department. So it's unlikely his proposed solution would have helped the student in question.
Not to mention that the lecture on RTFM didn't help anyone learn math anyway.
The student who asked the question probably never really understood the math (a rote learner), but did become the prof's office-hour best friend and got a good grade anyway.
Anyway, file under the teacher hostility section of this article, I guess.
One of the most bizarre errors I've seen undergraduates make is in induction proofs. I saw it so many times that I decided a TA must have been telling them to do this.
They would always show the base case correctly. Then they assume it's true for all positive integers up to a fixed positive integer "n." So far so good, now we need to prove it's true for "n+1." This is where weirdness happens, I have seen a hundred assignments where an equation is reduced down to "0=0" or "1=1" and they then write "Q.E.D" even though the fact that 0=0 was not under contest.
For the base case, suppose there is only n=1 horse. Obviously they are all the same color.
Suppose we had proven the statement up to n. For every n+1 horses, the first n horses must be the same color. The last n horses too, must have the same color. Clearly all of them have the same color. QED.
I'd guess it's probably because they don't actually understand proofs in the first place. There are a lot of computer science undergrads who go in with math only up through calculus somewhere and get totally lost when it gets to this weird wibbly-wobbly place where you have to make logical statements using previously unknown Latin notation instead of just solving for x.
Wait, I am missing something here. If the reduction to a true statement is done through operations that can be performed in both directions (which most of the trivial operations undergrads use are), then this is a perfectly fine approach, right?
I have a feeling that the problem is students show "If the equation is true, then 0=0" instead of "Note that 1=1; now, we derive P".
Even if all facts used in the proof are true "in both directions", it's still a serious breach of modern mathematical style to start with what you're trying to prove.
It's also possible that they don't explicitly note that the facts they're using are "true in both directions". edit: to be explicit, in that case I'd still consider the proof wrong
Anyways, if a student presented a proof like that, I would at least take a couple points off for the awkward style unless it was explicitly justified somehow.
It's also kind of a weird proof technique to start with 0=0 or 1=1 and bother to explicitly state this fact...
From reading math research papers, it is not that uncommon to start with what you are trying to prove and proceed with a series of reversible operations. This is generally done as a first step to convert the proposition into something that fits more naturally into the proof. This also tends to be done in prose.
It saddens me that so little of what goes on in math papers makes it into textbooks, and that math students do not study original papers themselves.
I don't see any pressing need for freshmen to write their very first non-Euclidean proofs in the same style as professional mathematicians.
Obviously, it's a matter of style. I still maintain few mathematicians would write canonical discrete math style induction proofs in reverse order.
But then, a mathematician would totally not write out induction proofs the same way we teach in freshman discrete math courses.
So in some sense the style question is completely irrelevant, and what matters is that the student's answer demonstrates unambiguously an understanding of the concept.
Like I said above, induction can be used to prove facts which are not equations. In fact, most equations are best proved without induction, at least in my opinion. I was getting this argument even when the fact to be proved was not an equation; that is an affirming-the-consequent fallacy.
If the fact to be proved is an equation, then this could be conceivably a reasonable approach - assuming it is correctly written. Our class was a math class, so we weren't just giving them equations that hold for natural numbers (which usually have a more illuminating non-inductive proof anyways).
I can't now remember the specifics of a particular paper because I have seen so many, but the main thing is they believed that if they arrive at some fact that happens to be true then their original hypothesis must be true. It is a classic affirming the consequent fallacy, just shrouded in lots of sophisticated symbol manipulations.
Depends on what you mean by "perfectly fine" ... if the student understands that they deduced the desired conclusion from the true fact 0=0, by a series of steps of the form "our current statement is implied by this next statement, which is implied by the next, which is implied by ... 0=0, which is known to be true", then I guess it's ok other than being an obfuscated/roundabout proof technique.
I studied mathematics as an undergraduate and found that these topics aren't covered in pure math classes typically until students are sufficiently experienced that they can grasp the fully technical definitions.
It's more common for CS courses rather than math courses to rush the definition of Big O out the door to students who will struggle to understand the formal definition.
First of all, your statement is meaningless. What exactly does it mean to "know math"?
Here's an alternative explanation: math students do lots of computations, and sometimes make mistakes which they would be able to identify immediately if someone told them "you made a mistake on this line".
Kind of like how all programmers occasionally drop a paren or a semicolon. Does that mean no programmers "know programming"? Of course not.
In fact, if you've never made a stupid mistake programming, odds are you're not a very good programmer because otherwise, you'd have done enough programming that one of these mistakes became inevitable.
The only difference is that most programming is simpler than university-level mathematics, so with a few decades of research we figured out how to use computers to prevent us from making a lot of mistakes without impeding our productivity too much.
For example, from the article:
"In fact, the great mathematician Leonhard Euler published a computation similar to this in a book in 1770, when the theory of complex numbers was still young."
So another way of wording your post: Euler doesn't know math.
Similarly, many of the examples (e.g. calculators) could be interpreted as students knowing too much math, and expecting their tools or others to know the same "math" they know.
Because the optimal line length for readability on a monitor seems to be between 50-95 characters per line[1]. On my 1920x1080px screen, I'm seeing about 300 characters per line in this article.
On most platforms, it's relatively easy to resize your browser to whatever width suits you best.
Personally, I tend to find columns wider than the 95 character thing easier to quickly scan - I had this site set to about 130 chars. When someone fixes the width of the text on the website, there's no sensible option I have to widen it to my preference.
Right - I can resize my browser so that the web page is most readable to me, because it defaults to full width. I can't resize my newspaper or magazine, so they don't default to full width.