Calculus with pics and gifs (0a.io)
188 points by archibaldJ 940 days ago | 37 comments



I haven't read the entire thing; my barometer for deciding whether to read the whole thing was the section on limits. The intuition the author tries to develop is very wrong. The whole section on the limit being the smallest impossible number to reach is more misleading than it is instructive. It's dead wrong for probably the most trivial kind of function, the constant functions. If f is a constant function, it's limit (as x approaches any value) is not only not an impossible value for f to obtain, it's the only possible value! Ignoring constant functions, it breaks down with even trivial examples, e.g. the limit of x sin(1/x) as x approaches 0. The limit is 0, but 0 is not impossible to obtain as x approaches 0; in fact f(x) = 0 for infinitely many x in any arbitrarily small neighbourhood of 0.

The real idea is much simpler than "what's the biggest impossible number", it's "what number am I getting closer to."

Also, there wasn't even an example of when a limit doesn't exist, that's quite a significant omission. The function f(x) = sin(1/x) is a good contrast to the previous function; in this case, the limit does not exist as x approaches 0, and it's easy to see here that it's because f(x) doesn't get closer to any value, it keeps oscillating wildly between -1 and 1.
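A quick numerical sketch of that contrast (my own illustration, not from the article):

    import math

    def f(x):
        # x*sin(1/x): the limit as x -> 0 is 0, and f actually hits 0 infinitely often
        return x * math.sin(1.0 / x)

    def g(x):
        # sin(1/x): no limit as x -> 0; it keeps oscillating between -1 and 1
        return math.sin(1.0 / x)

    for x in [0.1, 0.01, 0.001, 0.0001]:
        print(f"x={x}: x*sin(1/x) = {f(x):+.6f}   sin(1/x) = {g(x):+.6f}")
    # x*sin(1/x) shrinks toward 0, while sin(1/x) keeps bouncing around in [-1, 1]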


Hi, OP here. Thank you so much for pointing that out. That was a big mistake I made while trying to bring readers to a different way of viewing limits, which can be useful as an introduction to the (ε, δ)-definition. Yes, the limit shouldn't be defined this way, which is why I used the expression "can be viewed" rather than "is" at the start. But I got too carried away and forgot to clarify that there are cases where we shouldn't view it that way.

So I would like to thank you once again for letting me know about this huge mistake of mine that I failed to notice. I was busy and didn't see your comment until now. I have just added a clarification for that in the article.

The reason there's so little about limits in this article is that I intended to go through the limit section as quickly as possible and get started on differentiation and integration, which are what most readers are on my page for. On second thought I decided to at least pen down the (ε, δ)-definition, and so I added the blockquote with the "biggest/smallest-impossible" analogy, which unfortunately ended up being the last thing you saw in my article before you closed your tab. I'm actually planning on writing another article, one specifically on the concept of limits, covering topics from continuity, existence of limits (as you mentioned) and one-sided limits, to sequences and Taylor series, to limit points, neighborhoods and a bit of topology.


The other thing you're missing about limits is functions like tangent, which approach different values on each side of a point.

The reason limits are introduced in calculus is that you need a continuous function for the basic assumptions that allow integration/differentiation to work. Basically, the limits at all points you care about need to agree or you can't take the derivative. In other words, f(x) ≈ f(x + 1/∞) ≈ f(x − 1/∞) for all x.
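As a rough numerical sketch of how the two sides can disagree (my own example, using the tangent function mentioned above):

    import math

    a = math.pi / 2
    for h in [0.1, 0.01, 0.001]:
        left = math.tan(a - h)    # grows toward +infinity
        right = math.tan(a + h)   # falls toward -infinity
        print(f"h={h}: tan(a-h) = {left:+.1f}   tan(a+h) = {right:+.1f}")
    # The two one-sided limits disagree, so lim_{x -> π/2} tan(x) does not exist.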

http://en.m.wikipedia.org/wiki/Continuous_function

http://en.m.wikipedia.org/wiki/Fundamental_theorem_of_calcul...

PS: I still like the overall presentation.


Yup. That is continuity and one-sided limits, which I'll cover in the next article. I actually mentioned that in the comment above.

Other things that you would find in a lot of calculus textbooks but I didn't cover are complex functions, applications of calculus (e.g. in Newtonian physics, in optimization problems) and stuff like L'Hospital's rule, the squeeze theorem, etc. I didn't want to get into too much detail in this article because my plan was to be concise and get straight to the point. I want to write it in a way that anyone who is new to calculus can grasp the concepts and get an idea of what calculus is about in the shortest time possible. On a side note, most of the functions they will be dealing with are elementary functions, which are continuous over their domains.

And thanks.


its limit or did you mean it is limit?


Related: Better Explained, a fantastic "intuition-first math" website, has an excellent calculus primer. The book is free to view, and there are paid video courses to go with it.

http://betterexplained.com/calculus/


>>> A limit is the number you are "expected" to get from a function (or algebraic expression) when it takes in a certain input. By "expected" it is referring to the expectation of the output when x "approaches" a certain value.

>>> Limit can be viewed as either the biggest or smallest impossible number for a function to output, when you put in numbers that are slightly smaller or bigger than what x is approaching.

I realize this is intended to be an elementary discussion of calculus, but both of these definitions made me cringe. The second seems closer to the formal definition that I remember learning. And the formal definitions might not be necessary for making practical use of calculus. But I would call them explanations or analogies rather than definitions.

I taught college freshman math for one semester. The idea that the right answer is the right answer because it's what the teacher "expected" is a strong misconception that my students somehow formed while in high school.


Just for the record, the (ε, δ)-definition of limit is later mentioned as one way of formally defining limit after the analogies.
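For readers following along, the (ε, δ)-definition in question is the standard one:

    lim_{x -> a} f(x) = L   means:
    for every ε > 0 there is a δ > 0 such that
    0 < |x − a| < δ   implies   |f(x) − L| < ε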


Ah, that's cool. I missed it.


Maybe a nitpick, but in this image: http://0a.io/assets/img/1d_2d_3.png

The red triangle should be above the line so it touches the x-axis, as the "area under the curve" should always be measured toward the x-axis, not the y-axis. The picture as-is just happens to give the right answer for a straight line through the origin, but it illustrates the right answer for the wrong reason.
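For the record (a worked check of my own): with a line through the origin, f(x) = cx, the area under the curve from 0 to a is

    ∫_0^a cx dx = c·a²/2 = (1/2) · (base a) · (height ca)

i.e. exactly the triangle measured toward the x-axis, so the two drawings happen to agree in this one case.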


OP here. Thanks so much for pointing that out. No wonder I got this strange feeling every time I glanced through that part. I've just fixed it.


> What is a function?

> A function can be seen as a machine that takes in value and gives you back another value. It is what we use in maths to map numbers (input) to other numbers (output).

That's absolutely wrong (mathematically). A function is a relation between two sets, where each value from the first set is related to at most one value from the second set.
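One way to make the set-theoretic definition concrete (a sketch of my own, with hypothetical names):

    # A function as a bare relation: a set of (input, output) pairs with the
    # "at most one output per input" property -- no machinery, no time.
    relation = {(1, 'a'), (2, 'b'), (3, 'b')}   # two inputs may share an output

    def is_function(pairs):
        # each first coordinate may appear with at most one second coordinate
        seen = {}
        for x, y in pairs:
            if x in seen and seen[x] != y:
                return False
            seen[x] = y
        return True

    print(is_function(relation))                  # True
    print(is_function({(1, 'a'), (1, 'b')}))      # False: 1 maps to two values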

It's kind of weird to read discussions on how "functions in programming languages are weird because they're not functions in the mathematical sense", but then you see how for most people a function denotes a machine that performs work in time (all physical concepts), rather than the idealised mathematical definition. Doubly weird when the author is someone trying to explain calculus and has presumably sat through a lot of high-level mathematics classes.


Why is this "absolutely wrong"? This is a natural, useful and benign description of what a function is. It doesn't contradict the formal set-theoretic definition, which came long after mathematicians started talking about functions, and which isn't required at all to conceptually understand what a function is, just as nobody needs to know the Peano axioms to understand what the number 5 is.

I don't see any harm in saying that a function is like a machine.

Edit: And if we really want to be pedantic, then we might as well say that a function is not a relation between sets, but just a relation (with the requirement that no element maps to two different elements), and can therefore hold between proper classes as well (hence we have functors between categories). Also, there could exist other ways to axiomatize the notion of a function. The point is that the formal definition of a function is just a tool, not some metaphysical essence of what a function is.


Hi, OP here. Thanks for stating that. You are absolutely right. It isn't a technically correct analogy. I apologize for that. When I was writing the article I didn't want to get into details like how a non-injective surjection is not reversible, or how a function is merely a relation, where we shouldn't introduce the element of "time" by thinking of it as a process. I intended to go through the function section as quickly as possible (while still letting readers easily form a mental image of it) and get started on differentiation and integration, which are what they are on my page for.


Seconding this. I've had this discussion a number of times, and even when people initially reject the "relation" definition, if you push them with the Socratic method they eventually get to a point where they see it really is the only good way to define functions, regardless of how layman-unfriendly relations seem.

The 'machine' "definition" perpetuates as an idiom, with teachers who don't know any better teaching it so their students don't know any better, and it does great damage to math education.


Probably showing the damage it does will help people understand it better?


What's wrong with thinking of such a relation as a "machine that takes in [a] value and gives you back another value"? This seems to me needless pedantry.


Nothing wrong per se - both definitions are right within their domains, i.e. programming¹ vs mathematics.

It's just that I keep hearing (usually from FP people) how procedural/OO/impure-functional/what-have-you programming languages should stop calling their functions functions, because they're not functions in the mathematical sense and are therefore confusing; but I have the gut feeling that the programming definition (machine with input and output: X comes in, Y comes out) is actually the more intuitive one, and here we see someone explaining mathematical functions using the programming definition. Ultimately, I suspect that for computable functions, there is no difference between the definitions.

So I found it interesting how here the "wrong/confusing" definition is used in place of the "correct" one.

¹ (edit) I mean programming as the engineering discipline of programming computing machines, not in the sense of discrete mathematics.


That machine can only handle computable functions (http://en.m.wikipedia.org/wiki/Computable_function)

Although many reasonable choices are equivalent, you should also be more precise in what you mean by 'a machine'. If your machine is a regular expression matcher, for example, it cannot determine whether an arbitrary string contains matching parentheses pairs (http://en.m.wikipedia.org/wiki/Chomsky_hierarchy)


"Machine" is an abstract term, not limited to Turing machines. Somebody who is just learning about functions has no understanding of what "computable" means, nor why should something be "uncomputable" according to some specific definition of a machine. We can invent in our minds all kinds of machines that don't exist in the real world: Oracle machines have existed in recursion theory since the time of Turing.

If f is a function that tells us if a given Turing machine halts on a given input, there is nothing wrong about imagining this as a machine that receives a string and outputs a True/False answer. That no such machine can exist in the real world under some physical interpretation of the Church-Turing thesis is just a further observation - it has nothing to do with the correctness or usefulness of this formulation.


I think it is because we accidentally introduced the concept of time, which is a foreign concept to maths (but not to us), when talking about "machines" and the process of "taking [x] and giving you back [y]".


The machine could spit out different values when the same input is fed in at different times.
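In programming terms, a hypothetical snippet of my own making that point:

    import random

    def not_a_function(x):
        # a "machine" with hidden state (here, randomness): the same input
        # can produce different outputs on different calls, so the realized
        # (input, output) pairs fail the "at most one output" requirement
        return x + random.random()

    print(not_a_function(1))  # e.g. 1.37...
    print(not_a_function(1))  # a different value for the same input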


I think Gilbert Strang at MIT does a great intro with his "Big Picture" of Calculus overview. I really enjoy his starting with how calculus is a mapping between two related functions without getting into the mechanics of calculating them until later. https://www.youtube.com/watch?v=UcWsDwg1XwM


I don't think it's important that his narrative be perfect, as long as it's well-presented. There are thousands of calculus books on the market and nobody reads them because they all suck.

He may consider taking out the Will Ferrell gif, since it has nothing to do with calculus.


Thanks! I totally agree with you. That is certainly one of the things that inspired me to try writing explanatory articles on maths.

And well, that escalated quickly.


"Gottfried Wilhelm Leibniz, a great German mathematician, came up with this notation in the 17th century when he was still alive." That, and the definition of function made me go through it with different eyes. But very nice to see this approach anyway.


And as we can see, definite integration is more of a local operation, while indefinite integration is a global operation.

I think this was borrowed from here: http://math.stackexchange.com/a/20636 ? Either way, it'd be cool to link to this question in that "why integrals are hard" section as it has a bunch of great responses.


I don't believe it was borrowed as it is something that is definitely taught in courses. I remember learning it at some point early on in my mathematics education.

I definitely know I knew it by my vector calculus course (2nd year) where we use the various named theorems to translate between the differential and integral forms of Maxwell's equations.


Interesting. I'm helping my sister with differential calculus this semester. I made her calculate the rate of change of sin(x) between different pairs of points so she could see that it takes different values when the "traditional" rate-of-change rule is applied. It's then easier to justify the derivative concept of "slope at every point".
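Something like this sketch (my guess at the exercise, with a hypothetical point x0 = 1):

    import math

    x0 = 1.0  # the slope of sin at x0 should be cos(x0) ≈ 0.5403
    for h in [0.5, 0.1, 0.01, 0.001]:
        rate = (math.sin(x0 + h) - math.sin(x0)) / h   # "traditional" rate of change
        print(f"h={h}: average rate = {rate:.4f}")
    # as the interval shrinks, the average rate settles on a single number:
    # the slope at the point, i.e. the derivative cos(1.0) ≈ 0.5403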


Pretty good. I think Larry Gonick's The Cartoon Guide to Calculus[0] is better. It has a lot of narrative elements as well.

[0] http://www.amazon.com/Cartoon-Guide-Calculus-Guides/dp/00616...


One way to understand limits is compound interest. Start with a dollar, compound 100% in a year to get $2. Halving the interest rate and compounding twice gives $2.25. The "limit" of this less-interest-more-often process, ad infinitum, is e (2.718...). This introduces both the infinitesimal and the infinite.


Excellent example: e = lim_{n -> ∞} (1 + 1/n)^n
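A few steps of that less-interest-more-often process, as a quick sketch:

    for n in [1, 2, 10, 100, 10_000, 1_000_000]:
        print(f"n={n}: (1 + 1/n)^n = {(1 + 1/n) ** n:.6f}")
    # 2.000000, 2.250000, 2.593742, 2.704814, ... creeping up toward e ≈ 2.718282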

The concept of infinity (∞) is perhaps the most important new idea in calculus. Specifically, calculus is about procedures with an infinite number of steps, or with infinitely small steps.

The derivative is a slope calculation (rise/run) with an infinitely short run. The integral is a rectangles-approximation-to-an-area using infinitely thin rectangles, and series are summation procedures with an infinite number of steps.

High school math deals with procedures with a finite number of steps, whereas in calculus we learn to use infinity as part of our calculations. The reason limits are important is that they allow us to make certain statements that would otherwise not be true:

    1/n ≠ 0 even if n is huge
         but    lim_{n -> ∞} 1/n = 0

    sum([1/2^n for n in range(0,N)]) = 1.99...9 ≠ 2
         but    sum([1/2^n for n in range(0,∞)]) = 2
The equality in both of the above examples depends on (mentally) carrying out a procedure with an infinite number of steps.

(examples taken from my math book)
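In runnable Python (a quick sketch of mine; note ** instead of the mathematical ^):

    for N in [4, 8, 16, 32]:
        partial = sum(1 / 2 ** n for n in range(0, N))
        print(f"N={N}: {partial:.10f}")
    # 1.8750000000, 1.9921875000, ... every finite partial sum falls short of 2;
    # only the limit of the procedure equals 2 exactly.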


the issue i take with the way most calculus is taught is starting with limits and derivatives. this is not the way humans discovered calculus. we discovered integral calculus (finding the area under a curve) a couple thousand years earlier. it is really cool that derivatives and integration are inverse operations, but starting with integration is, in my opinion, easier to grasp and lets the student re-derive formulas they already know: the area of a square, triangle, circle, etc. once that gap is closed, introducing derivatives makes more sense.


Depends on the definition of calculus. Without the fundamental theorem (which absolutely requires differentiation), explicitly calculating those areas is unreasonably hard (undoable in all but the simplest cases, and in the cases where it is doable, you usually have to be Archimedes to do it). Sure, you could start the course with numerical integration, but this really wouldn't help anything else.

(You might be interested, though, to know that your suggestion is essentially how higher math is sometimes done. The natural logarithm is defined by ln(x) = ∫_1^x (1/t) dt, which if you look closely doesn't involve exponentials in any way, allowing you to non-circularly define the exponential function as the inverse of the logarithm, and once you've got the exponential function, the game is on.)
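A crude numerical version of that definition (midpoint rule; a sketch of my own, with hypothetical names):

    import math

    def ln_via_integral(x, steps=100_000):
        # ln(x) as the area under 1/t between t = 1 and t = x
        dt = (x - 1) / steps
        return sum(dt / (1 + (k + 0.5) * dt) for k in range(steps))

    print(ln_via_integral(2.0))   # ≈ 0.693147...
    print(math.log(2.0))          # 0.6931471805599453, for comparison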


Thoroughly enjoyable review. I haven't touched calculus/analysis since the mid-80s; that was a good read.


That is great.

If anyone knows of anything like this for set theory or probability/statistics, I would love to see them.


Good idea! If I can find time this quarter I'll give it a shot.



