Introduction to Differential Equations (2008) (lamar.edu)
387 points by peter_d_sherman 6 days ago | hide | past | web | favorite | 85 comments

DiffEq seemed like black magic to me when I took it as a freshman in college. They basically just taught us a recipe bag for solving equations of various shapes, with very little insight.

When I started a gamified music discovery company, I actually ended up using DiffEq to define a scoring algorithm that would produce continuously varying point values based on time series input data. I had to relearn how to do it, but the concepts made far more sense with a real application.
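A hypothetical sketch of that kind of scoring (not the commenter's actual algorithm; `lam`, the step size, and the event format are all made up here): a score that decays continuously between events and gets bumped when points come in, which is exactly the kind of behavior a first-order linear ODE ds/dt = -lam*s captures.

```python
def score_over_time(events, lam=0.1, dt=0.1, t_end=10.0):
    """events: dict mapping event time -> points awarded at that time."""
    # map each event time to its nearest integration step
    bumps = {round(te / dt): p for te, p in events.items()}
    n = int(round(t_end / dt))
    s, history = 0.0, []
    for i in range(n + 1):
        s += bumps.get(i, 0.0)   # impulse input at this step, if any
        s -= lam * s * dt        # continuous exponential decay (forward Euler)
        history.append((i * dt, s))
    return history

# two 10-point events; the score decays between them
trace = score_over_time({0.0: 10.0, 5.0: 10.0})
```

The point of modeling it as an ODE rather than a lookup table is that the decay rate and the event response become two independent knobs.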

Also, what they usually don't tell you is that the recipe bag only works for toy problems. For real applications you most often need numerical approximations.
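As a minimal illustration, a numerical scheme like forward Euler works regardless of whether any closed-form trick applies; here it is applied to dy/dt = -y, which does have an exact solution (e^-t), so the error can be checked directly.

```python
import math

def euler(f, y0, t0, t1, n):
    """Integrate dy/dt = f(t, y) from t0 to t1 in n forward Euler steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
exact = math.exp(-1.0)   # known solution, used only to measure the error
```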

You can do quite a lot simplifying to get equations that are solvable. A lot of engineering techniques do this, using simplifications that capture the essential behavior of a system (in fact I'd argue that knowing when such approximations are appropriate is one of the core skills of an engineer).

That said, even if you need a numerical solution it will still often require a lot of simplifications in order to be tractable. Multiphase fluid flow, for instance, relies on tons of physics simplifications and empirical correlations in order to make numerical techniques viable.

Or you learn how to make the toy problem that has the same behavior as what you are trying to model. If you don't understand how to simplify problems, you end up just chucking the computation at it and calling it a day. If you have a solution to the basic problem you can give yourself a warm start.

One of my professors used to say that taking derivatives is mechanical and works for every function, but you can find exact solutions for only an infinitesimal fraction of differential equations. Thinking about it, the same applies to integrals.

Yes the same applies to integrals. But when you go from 1d to 2d or 3d problems, the analytical approach quickly becomes too difficult.

If you consider analyzing an RL/RC circuit as a "toy problem" then I guess you're right.

Linear circuits are mostly analyzed using the Laplace transform, i.e. in the s-domain, where the differential equations are abstracted away. In the time domain, simulators are still used most often. But yes, a really simple circuit like an RC/RL is usually done on the back of an envelope, but then you're talking really simple.

The problem with the analytical approach to differential equations is that it doesn't scale well, and you don't know beforehand whether the approach will work, so you might as well use the numerical approach from the start.

Laplace transformations are differential equations, so I fail to see your point. They're just in a different domain. However, I do see your point about numerical methods, since most complex problems are simulated anyway in simulation software. So in essence, the application becomes pointless because it's at such a high level of abstraction that you don't even have to think about it. You just punch in some numbers, hit analyze, and the computer does it all for you.

> Laplace transformations are differential equations so I fail to see your point.

What I mean is that typically an electrical engineer will convert L and C elements to complex impedances (which depend on the frequency through s), and will then compute as though the elements are ordinary resistances. The expression "d/dt" isn't used in the entire analysis.

See: https://en.wikipedia.org/wiki/Phasor


> the phasor transform thus allows the analysis (calculation) of the AC steady state of RLC circuits by solving simple algebraic equations (albeit with complex coefficients) in the phasor domain instead of solving differential equations (with real coefficients) in the time domain
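A quick sketch of that algebra-instead-of-calculus workflow (component values here are arbitrary): the capacitor becomes the complex impedance 1/(jωC), and an RC low-pass reduces to a resistor-divider calculation, with no d/dt anywhere.

```python
import cmath, math

def rc_lowpass_gain(R, C, freq_hz):
    """Magnitude and phase of Vout/Vin for an RC low-pass at one frequency."""
    w = 2 * math.pi * freq_hz
    Zc = 1 / (1j * w * C)      # capacitor as a complex impedance
    H = Zc / (R + Zc)          # plain voltage-divider algebra
    return abs(H), cmath.phase(H)

R, C = 1e3, 1e-6               # 1 kOhm, 1 uF -> cutoff around 159 Hz
gain_at_cutoff, phase = rc_lowpass_gain(R, C, 1 / (2 * math.pi * R * C))
```

At the cutoff frequency the divider gives |H| = 1/sqrt(2) and a -45 degree phase shift, the textbook result, purely from complex arithmetic.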

This is like saying that if I convert mph to m/s then it's not a speed anymore. It's still a differential equation, just in a different domain because you can convert back from the s-domain into the time one.

Yeah, it's relatively dry material which is hard to grasp without the context of why we need it and how it can be applied in the real world. While the intro is good, it still has the same problem - just shows (in a good way) "some math".

I think the calculus of variations might be a better approach to introducing ODEs in first year.

You can show that by generalizing calculus so the values are functions rather than real numbers, then trying to find a max/min using the functional version of dy/dx = 0, you end up with an ODE (viz. the Euler-Lagrange equation).
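The step being described, sketched in standard notation: requiring a functional of functions to be stationary produces an ODE, the Euler-Lagrange equation.

```latex
J[y] \;=\; \int_{a}^{b} L\bigl(t,\, y(t),\, \dot{y}(t)\bigr)\,dt,
\qquad
\delta J = 0
\;\Longrightarrow\;
\frac{\partial L}{\partial y} \;-\; \frac{d}{dt}\,\frac{\partial L}{\partial \dot{y}} \;=\; 0 .
```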

This also motivates Lagrange multipliers which are usually taught around the same time as ODEs. They are similar to the Hamiltonian, which is a synonym for energy and is derived from the Euler-Lagrange equations of a system.

Of course you would brush over most of this mechanics stuff in a single lecture (60 min). But now you've motivated ODEs and given the students a reason to solve ODEs with constant coefficients.

You don't need a Lagrangian to invent mechanical ODEs. You could talk about mixing tanks, objects under complicated forces, and so on with a lot less background information.

We had a great professor and this was one of the most enjoyable classes I've ever taken. One particular assignment was a group paper where we were supposed to essentially explain and use the SIR model. We extended the model to an SIRZ model and effectively argued that zombie apocalypses in fiction are essentially impossible unless they include some supernatural elements. Under a wide range of assumptions (e.g., zombies rot/zombies don't rot) the infection always stopped before it spread significantly. (We got an A.)
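A minimal SIR sketch of the kind of model described (forward Euler; beta, gamma, and the initial conditions are illustrative values, not taken from the assignment):

```python
def sir(beta, gamma, s0, i0, r0, dt=0.01, days=160):
    """Susceptible/Infected/Recovered fractions after `days`, forward Euler."""
    s, i, r = s0, i0, r0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # S -> I transitions this step
        new_rec = gamma * i * dt      # I -> R transitions this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

# beta/gamma = 3 > 1, so the infection takes off; the zombie argument
# amounts to showing the effective ratio stays below 1 under their assumptions
s, i, r = sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, r0=0.0)
```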

I use zombies in my epidemic modeling class - one of my favorite results is that, with a semi-complex model, you can replicate the script of most zombie movies mathematically (lots of people die, the survivors take shelter somewhere and are safe for a while, attrition starts to take hold, things collapse, and then you're left with a small surviving fraction of protagonists at the end).

You need to learn the math before applying it. You don't apply things that you don't understand.

There are certainly people who can learn that way, but it's not effective for most people. Most people have a limit to the amount of abstraction they can operate under before they need a tangible connection. Once that connection is made, most people can continue on to higher levels of abstraction.

The problem is math is entirely abstract. So you must learn the abstraction before applying it. Otherwise, you learn how to add 2 + 3, but you don't learn how to add n + m.

That strikes me as a strange way of looking at it. Most math taught to people who aren't pure mathematicians is taught precisely because of application to concrete situations. We value pure math largely because of the potential for future concrete applications.

In case you aren't aware (and I wasn't until I trained to be a teacher, so this isn't meant to be condescending), there are alternative methods of teaching besides abstraction-first. See https://en.m.wikipedia.org/wiki/Inquiry-based_learning

I need the concrete before the abstract just so my mind knows that what I’m seeing is not BS. Because in finance, 95% of the math is BS formulas that have no connection to reality.

"Ten lessons I wish I had learned before I started teaching differential equations" is relevant here. I feel that DiffEq was the most useless undergraduate course that I took for my comp sci degree. They really didn't spend enough time going into the fundamental concepts so that I am not even sure I could recognize a differential equation if it were staring me in the face at this point... much less any of the tricks that they taught us to solve them.


What can we expect students to get out of an elementary course in differential equations? I reject the “bag of tricks” answer to this question. A course taught as a bag of tricks is devoid of educational value. One year later, the students will forget the tricks, most of which are useless anyway. The bag of tricks mentality is, in my opinion, a defeatist mentality, and the justifications I have heard of it, citing poor preparation of the students, their unwillingness to learn, and the possibility of assigning clever problem sets, are lazy ways out.

In an elementary course in differential equations, students should learn a few basic concepts that they will remember for the rest of their lives, such as the universal occurrence of the exponential function, stability, the relationship between trajectories and integrals of systems, phase plane analysis, the manipulation of the Laplace transform, perhaps even the fascinating relationship between partial fraction decompositions and convolutions via Laplace transforms. Who cares whether the students become skilled at working out tricky problems? What matters is their getting a feeling for the importance of the subject, their coming out of the course with the conviction of the inevitability of differential equations, and with enhanced faith in the power of mathematics. These objectives are better achieved by stretching the students’ minds to the utmost limits of cultural breadth of which they are capable, and by pitching the material at a level that is just a little higher than they can reach.

We are kidding ourselves if we believe that the purpose of undergraduate teaching is the transmission of information. Information is an accidental feature of an elementary course in differential equations; such information can nowadays be gotten in much better ways than sitting in a classroom. A teacher of undergraduate courses belongs in a class with P.R. men, with entertainers, with propagandists, with preachers, with magicians, with gurus. Such a teacher will be successful if at the end of the course every one of his or her students feels they have taken “a good course,” even though they may not quite be able to pin down anything specific they have learned in the course.


I feel like Vladimir Arnold echoed a similar sentiment, but I can't find the quote. He was complaining that a first course in ODE and PDE tends to teach methods for finding exact solutions, even though equations with exact solutions are vanishingly rare in practice.

Or maybe it was Rota again, who knows?

Edit: nope, Rota again. Lesson one in your link :)

definitely in the spirit of Arnold

his book on ODE is otherworldly

the quote maybe comes from his pde book

The "bag of tricks" seems to allow to efficiently test a lot of people by writing tests and scoring them.

Unfortunately it seems to be more about the score than about the learning.

Using my own words, I'd say it was necessary though. One thing is the technique abstracted from the application, the other thing is a concept applying the technique. A fully abstract technique is awfully dry, so it's best taught connecting to other techniques. At that point, ODEs are a concept reduced to mathematics.

I found ODEs interesting only later when used in Linear Algebra, which could lead to differential algebra. But there are different approaches and not one is the true one, so what should an instructor do? You need calculus for ODEs, so much is true I guess.

Growth in predator/prey populations is interesting, too: plotting fox population over hare population. Finally, graphs can go in circles, not just in one direction along one axis.
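The fox-over-hare loop comes from the classic Lotka-Volterra predator-prey system. A rough sketch (parameter values are illustrative), in which the (hare, fox) trajectory circles its equilibrium instead of running off along one axis:

```python
def lotka_volterra(hare, fox, a=1.0, b=0.1, c=1.5, d=0.075,
                   dt=0.001, steps=20000):
    """Forward Euler integration; returns the (hare, fox) phase-plane path."""
    path = []
    for _ in range(steps):
        dh = (a * hare - b * hare * fox) * dt   # hares breed, get eaten
        df = (-c * fox + d * hare * fox) * dt   # foxes starve, eat hares
        hare, fox = hare + dh, fox + df
        path.append((hare, fox))
    return path

path = lotka_volterra(hare=10.0, fox=5.0)
```

Plotting `path` as fox vs. hare gives the closed-ish loop the comment describes (forward Euler spirals slowly outward; a better integrator closes the loop).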

I love #10, especially the last part. I would argue that even primary/secondary education is better served in this fashion.

Relevant to a recent Joe Rogan podcast with Neil deGrasse Tyson on how many teachers in your life inspired you. It's not so much about the transfer of information, especially today; it's about the excitement and inspiration around the topics you are presenting.

"Such a teacher will be successful if at the end of the course every one of his or her students feels they have taken “a good course,” even though they may not quite be able to pin down anything specific they have learned in the course."

Basically the equivalent of, if the course makes you feel good, then the course is good. Not something that I'd advocate.

By the way, a differential equation is simply an equation with a derivative in it. If you can't recognize that, then you didn't go far enough in math.

Actually, I have a degree in math from a reputable university. So while you are certainly entitled to think that isn't going "far enough in math", I do think that your opinion is in the minority here. If anything, I would say the fact that I have completely forgotten everything from that class is evidence I may have actually gone too far in math... or further than I ended up needing.

And I think it's evidence that the differential equations are not taught in a way that is beneficial for comp sci students (and other types of students too, but I can't speak to that). I took other classes that have not been applicable to my career after graduation - things like finite state machines, computability, and complexity theory. But I still remember a lot from those classes - due to their focus on fundamental ideas and proving things.

I've used state machines on the job. In fact, entire architectures are designed around finite state machines. I've also solved a complex logic problem a senior engineer couldn't solve by implementing K-maps.

And of course my opinion is in the minority, because nearly everybody under the sun complains about how useless college is and how things should be taught with more application, without realizing that things are taught minus the application for a reason (so that you can apply things generally instead of specifically), and that many of the hot technologies are just re-purposed PhD research.

Also, the effort many students give to college is less than average (at least from personal experience going through a private engineering school) and probably for most college students. So their complaints are really just the result of laziness and lack of responsibility more than anything.

A minority opinion does not make it invalid or worth less, unless you have evidence to discredit it.

> I've used state machines on the job. In fact, entire architectures are designed around finite state machines.

My point wasn't that FSMs are useless. My point was that despite the fact that I personally have never needed to convert an NFA to a DFA in my professional career or program a turing machine, I still have a deep appreciation for those courses because they fundamentally changed the way I think about computation.

> I've also solved a complex logic problem a senior engineer couldn't solve by implementing K-maps.

While you are clearly very proud of this fact, I'm not sure why that's relevant here?

> everybody under the sun complains about how useless college is and how things should be taught with more application without realizing that things are taught minus application for a reason (so that you can apply things generally instead of specifically)

This is basically the exact opposite of my complaint. I was complaining that differential equation courses essentially focus on teaching a bag of tricks for solving specific types of equations. I'm sure that behind each of those tricks there is a very fascinating how and why that - upon deeper exploration - may have changed the way I think about numbers. But that certainly was not the focus of the class that I took.

> the effort many students give to college is less than average

So you're saying in a given population, many of its members will be less than average? Very insightful. If only I had gone further in math maybe I would be capable of such insights, too. :)

> So their complaints are really just the result of laziness and lack of responsibility more than anything.

Be careful with this line of thinking. You could say the same thing to discredit any attempt to improve the way a course is taught. But surely you must agree there is room for improvement, right?

> A minority opinion does not make it invalid or worth less, unless you have evidence to discredit it.

Given that you made no attempt to substantiate your opinion - it seems to me that the logical thing to do here is to side with the majority.

I'd like to write a more algebraic treatment of differential equations, e, derivatives, and integrals. Something like Maxwell's equations (pardon the pompous aspect) but for all things differential. There's a webpage about differentiation that starts down that path (and is the inspiration behind my quest).

> They basically just taught us the recipe bag for solving equations in different shapes, but very little insight.

This is why I just dropped my DiffEq class. It was optional anyway, but when I go to a university-level math class I expect insight, not rote memorization.

The algorithms you learn for graphs etc. are also just math, and equally abstract. What makes the difference in how real you treat two concepts that are both abstract but generally applicable?

Not the OP, but I've had good and bad math teachers. The bad ones tend to teach rote steps, "do this, do this, do this, done," without any attempt to explain why things work the way they do, without drawing parallels to already-learned things, without trying to teach any _why_.

Then you hear students asking, "when am I ever going to use this?"

My good teachers, on the other hand, always tied what we were doing into a larger scheme. If there were similarities or other relations between concepts, they'd be pointed out. If someone wasn't 'getting it', they had other ways of looking at it at hand, would sometimes give alternate methods of doing the same thing, etc.

In short, one tells you to memorize in a vacuum for no good reason. The other helps you learn.

This has a huge impact for me.

I didn't get calculus. It was a disaster for me.

Now I'm a mathematical epidemiologist. Why? Because someone introduced me to the grander scheme of things.

But I would think the same applies to your algorithms teachers; there should be just as many good and bad ones there. Yet there seems to be a different threshold for what counts as too rote and unusable.

Depends on how they're taught, but to be honest, at a certain point in algorithms class, the subject matter also became tedious and abstract beyond recognition.

Tangental anecdote: Every time I see Diff EQ mentioned the first thing that pops into my head is the number 11. That's the score of my first, last, and only Diff EQ test. 11%.

had you studied at all? most undergrad differential equations classes are fairly mechanical in nature, you just learn to identify the type of problem, then you follow the steps exactly as they are written in the textbook, super little variation or freedom

This is often more true than many believe. But in my experience few students get so far that they recognize the patterns and can respond in the way you describe. That might be the crux.

I remember my undergraduate mechanical vibrations class. Every exam was basically a test of how well you could do the Laplace transform on some linear ODEs. I memorized the most common transforms, so this became fairly straightforward and fast for me, but it was obvious the other students were struggling.

If I had gotten a problem that wasn't solvable with the Laplace transform, say a linear ODE with variable coefficients, I'd likely have taken longer to do the exam, but those never appeared in the class.

I think I got 19% which put me well in the top half of the class...

Does anyone know websites or resources that explain how Diff Eq is used in Computer Science? I know it's used in a variety of areas in CS, but I'd really like to see or read well-explained tutorials or articles. E.g., what's finding area got to do with topics in CS? What does 'area' correspond to?

It's essential in computer graphics or in computational geometry and in simulation science, where I can speak from experience - these are more intersectional topics than pure CS I think but I'd say it's applied CS.

E.g. in finding faster ways to do Ray/surface intersections (if we're talking about actually industrially useful geometry like all kinds of splines and not just triangle meshes), differential geometry is essential - even with triangle meshes you can apply it in normal and curvature estimation. Differential equations and integrating them enter the picture if you want to find the shortest way from one surface point to another along the arbitrarily shaped surface.

With simulations differential equations are everywhere because any physical system as a function of space (and time) is a collection of differential equations that you need to solve.

Earth-mover's distance and the Wasserstein metric have recently gotten attention again; their original relevance was the Monge-Ampère problem: how to distribute a continuous distribution of 'heaps' of some kind into a distribution of 'sinks' with the least amount of total distance moved. That is a nonlinear partial differential equation to solve in two dimensions.

We need to apply numerical methods and nonlinear optimization to solve such problems, and CS is part of doing that quickly. Because there are no general closed-form solutions for most systems of differential equations, we need algorithms to solve them approximately.

Not really Computer Science per se, but practically every engineering field uses numerical solvers. No one realistically solves nontrivial differential equations by hand these days.

Not so much in engineering, but in science, approximate perturbation techniques are still a big deal.

I took DiffEQs (the only 5-credit course offered, which I thought rather odd), of course, on my way to my CP degree at FIT in central Florida, and the one thing I remember was that they turn out to be very useful in biological simulations.

As I remember, population ebb-and-flow based upon available resources is something it was especially good at estimating and tracking over time, so I would imagine that in any of the Sims-type games they would be quite useful.

It turns out they are damn good at estimating, over time, many things, so a little bit of research and the right game and it's not hard to see how the two could work well together.

Discrete difference equations show up everywhere, and are basically the same stuff. Graham, Knuth, and Patashnik's Concrete Mathematics is great.
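One way to see the "basically the same stuff" claim (function names and values here are just for illustration): the compounding recurrence x[n+1] = (1 + r/k) * x[n] approaches the ODE solution x0 * e^(rt) as the number of compounding periods k grows.

```python
import math

def compound(x0, r, t, k):
    """Iterate the difference equation x[n+1] = (1 + r/k) * x[n] for k*t steps."""
    x = x0
    for _ in range(int(k * t)):
        x *= 1 + r / k
    return x

coarse = compound(100.0, 0.05, 10, k=1)      # yearly compounding
fine = compound(100.0, 0.05, 10, k=365)      # daily compounding
continuous = 100.0 * math.exp(0.05 * 10)     # the ODE / continuous limit
```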

Any type of graph you have, you can use it, just like integrals, to calculate areas etc.

For a lot of things in life we plot graphs. Your software might do it transparently to you, but how it works internally is using mathematical concepts. It is good to know it and I really enjoyed learning it at college, as it opens your mind about how things work. But I don't believe it is a must to know.

I'm also interested in this, it comes up in a lot of EE courses (DSP, Control theory), but I've yet to see it used in a CS course.

I think the schools spend far too much time on symbolic differentiation and integration. This limits the exercises to the kinds of toy problems that yield to those methods. Kids get sidetracked on solving anti-differentiation puzzles, while the fundamentals are relegated to those (largely useless) puzzles.

After 20 years of engineering--in almost every case--numerical methods have been the only way forward. In hindsight, a year-and-a-half long course to convey the fundamentals seems excessive.

While numerical methods are absolutely critical in practice, analytic methods like you learn in what people call calculus and diff-eq are _absolutely_ essential to understanding the physical world.

You can't actually _understand_ numerical methods without a fairly deep grounding in analytical methods.

The real problem here is a lack of context. Engineering and most science curricula take a "short-cut" through mathematical education. They try to teach just enough math to get through the major coursework. As a result you end up with students who feel it's all just one big memorization trick.

> analytic methods ... are _absolutely_ essential to understanding the physical world

How so? My experience has been that the "physical world" is where the symbolic approach completely breaks down.

> Engineering and most science curriculums take a "short-cut" through mathematical education.

The only people taking more math than scientists and engineers would be mathematicians. A year-and-a-half course covering the limit, tangent-at-a-point, functions of the tangent-at-a-point, area-under-the-curve, and generalizing all of that to higher dimensions doesn't seem like much of a "short-cut" if you ask me.

    > "physical world" is where the symbolic approach completely breaks down.
It does not "break down" it just becomes intractable in certain cases.

One needs to be able to solve problems that have all but the most essential details stripped out in order to develop a sense of how physical law actually works. Many times that is even "good enough" to get to a solution.

The best way to do that is through analytic methods, which give not only "an answer" but also tell you important features of the answer. These analytic solutions have "handles" you can use to ask "what-if" questions -- eg zero's in the denominator to indicate poles, behavior of the system as you take certain limits, geometric aspects such as symmetry, patterns in recurrence relations, etc, etc, etc..

I would posit that the reason so many people wipe-out in undergrad physics is that the coursework insists on pounding the square peg of law into the round hole of analytic methods.

Something most people in the STEM fields refuse to acknowledge is that throwing away information complicates things just as often as it simplifies them.

I say that if the ball doesn't bounce forever, the equation should reflect that.

It entirely depends on your goals. If you want to be a productive engineer, then most of that stuff is not going to be useful in your job.

OTOH, if you study physics at an advanced level, it's rather shocking how effectively all that analysis models the world, despite throwing away a lot of information. Try studying solid state physics. It's crazy the number of assumptions they make, and yet the theory still produces very accurate results.

There's a reason Eugene Wigner penned an essay with the title "The Unreasonable Effectiveness of Mathematics".

Word up for Mr. Wigner !!

If you ask any STEM professor or TA that teaches undergrads, the reason so many students "wipe out" is because of a lack of preparation in fundamentals-- not just "calc 101", but even more basic than that, algebraic manipulation.

The material in a physics 101 course is just the barest minimum and it goes beautifully hand-in-hand with calculus 101.

That is only if you're allowing yourself to treat the numerical methods as a black box. If you want to be sure that your method is okay, you prove error bounds as the mesh size goes to zero. Otherwise it is just building tables of black magic for which scheme to use when.

That's kind of where I was going with this. Instead of burning all of that time on symbolic differentiation, dig-down into numerical methods ASAP so students can get a feel for all of the related "gotchas"--of which there are many...

edit: IMHO, many of those "gotchas" are much more interesting than the fundamentals of calculus.

Consider symplectic integrators. You would never come up with them or realize the problem of energy drift if you hadn't first paid attention to the fundamentals of the geometry and calculus underlying the problem.

This is just my favorite example, but it illustrates how understanding the fundamentals also explains the gotchas. Just getting a feel for them through experience is again just black magic by building up a table of what to use when without the generalizing principle behind it.
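For what it's worth, the energy-drift contrast is easy to reproduce on the harmonic oscillator x'' = -x (step size and step count chosen arbitrarily): explicit Euler inflates the energy every step, while semi-implicit (symplectic) Euler keeps it bounded at the same cost.

```python
def integrate(symplectic, dt=0.01, steps=10000):
    x, v = 1.0, 0.0                          # start at x=1, at rest; energy 0.5
    for _ in range(steps):
        if symplectic:
            v -= x * dt                      # kick: update velocity first...
            x += v * dt                      # ...drift: position uses new velocity
        else:
            x, v = x + v * dt, v - x * dt    # plain explicit Euler
    return 0.5 * (x * x + v * v)             # energy of the unit oscillator

e_explicit = integrate(symplectic=False)     # drifts well above the true 0.5
e_symplectic = integrate(symplectic=True)    # stays close to 0.5
```

The only difference is the update order, which is exactly the kind of structural insight that experience-driven "what to use when" tables miss.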

Never said to do away with the fundamentals. Will say that most symbolic differentiation and integration (which is a big chunk of the coursework) is not fundamental as much as it is fruitless busywork.

Even so, I spent a year of my time--and God knows how much of other people's money--grinding out the mathematical equivalent of crossword puzzles so I could get my job certificate--just like every other engineer.

Use that same time to apply the fundamentals to numerical methods, and you get to go in far more interesting directions, like symplectic integrators or chaos theory.

>Never said to do away with the fundamentals. Will say that most symbolic differentiation and integration (which is a big chunk of the coursework) is not fundamental as much as it is fruitless busywork.

It's no more busywork than being able to multiply two single digit numbers in your head. Whether it's useful to your job really depends on the job. I had a job once in the engineering industry. When we were in meetings discussing projects, if you could not do those types of analyses (e.g. asymptotic behavior of certain Calc II type integrals) in your head, you would not know what's going on. Sure, everyone could explicitly show all the steps for your benefit, but you'd be slowing everyone down.

Theoretical fluid dynamicist here. I solve a lot of not nice differential equations exactly or approximately.

Whether numerical methods are viewed as the primary way forward is a bit of a self-fulfilling prophecy. If you don't think analytical solutions end up being useful, you probably won't put in the work needed to generate them in the first place, so you never see the value.

Even if you go all in with numerical methods, you need to test your code. This requires an exact solution and knowledge of the convergence rate of the numerical scheme. The exact solution can be for a special case that is easy to solve. You might need multiple exact solutions to cover all the physics. You can also use techniques like the method of manufactured solutions, but if you don't like analytical methods you'd probably hate that.

You need to check if the empirical convergence rate matches the theoretical one. In practice this is rarely done, but it's essential towards eliminating bugs. So you can't entirely avoid exact solutions if you want to do purely numerics right. This was not covered in my first differential equations class, unfortunately, but I think it's an essential topic.
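The check being described, in miniature (the scheme and the test equation are chosen here for simplicity): solve y' = -y with forward Euler at several resolutions and confirm the observed order of convergence is about 1, as the theory for Euler predicts.

```python
import math

def euler_error(n):
    """Error at t=1 of forward Euler on y' = -y, y(0) = 1, with n steps."""
    h, y = 1.0 / n, 1.0
    for _ in range(n):
        y += h * (-y)
    return abs(y - math.exp(-1.0))   # exact solution is e^(-t)

# halving h should halve the error for a first-order scheme
orders = [math.log2(euler_error(n) / euler_error(2 * n))
          for n in (100, 200, 400)]
```

An observed order far from 1 would point to a bug, which is exactly why the empirical-vs-theoretical comparison catches mistakes that eyeballing a solution cannot.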

Exact solutions are often impossible, but less so than most people believe. I've produced exact solutions many times to equations people thought required numerics. The exact solutions are very valuable by themselves, as they can be used much faster than numerical solutions in most cases and allow you to see the structure of the solution. I think you should always try hard to make an analytical exact or approximate solution. It might be rare that you can do it, but the value is large and if we stopped teaching these methods it would become much more rare.

As for you mentioning in another post the problem of "pounding the square peg of law into the round hole of analytic methods", you should learn about approximate analytical solutions, which give you a lot more flexibility. You still ultimately have the same problem, though.

The course overview is here[0]. I haven't looked at how in-depth the later sections are, but I can't think of a topic related to differential equations that I used in my undergrad physics degree that isn't at least touched upon here.

[0]: http://tutorial.math.lamar.edu/Classes/DE/DE.aspx

It doesn't include Green's functions (I don't really know what that is but some other undergrad diff eq books have them).

Why is this on top of HN? There are so many websites teaching introductions to differential equations. Is there something interesting about this one in particular?

They're pretty much the best calc notes I've seen, was very useful a few years ago

Paul’s Online Notes are a pretty well-known calculus and differential equations reference. IMO they’re pretty decent.

Paul's Online Notes is my childhood. There's something nostalgic about seeing this submission.

Can anyone compare this vs other resources to learn differential equations? I want to learn math roughly to the level of an undergrad engineering student, so I've looked at some Advanced Engineering Mathematics books (one by Zill, another by Kreyszig), both have mostly good reviews, and to be honest, Paul's Notes seem almost a level above in clarity and understandability. For example, compare the explanation of integrating factor and exact equations. The books typically explain exact equations via a total differential, which I'm kind of confused about... why not just say that the left part of the equation is a total derivative?
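For reference, that total-derivative view really is the whole trick; for the linear first-order case it fits in two lines (standard material, not specific to either book):

```latex
y' + p(t)\,y = g(t)
\quad\xrightarrow{\ \times\, e^{\int p\,dt}\ }\quad
\left(e^{\int p\,dt}\, y\right)' = e^{\int p\,dt}\, g(t)
```

Integrate both sides and divide by $e^{\int p\,dt}$ to get $y$. Likewise, $M\,dt + N\,dy = 0$ is "exact" precisely when the left side is the total derivative of some $F(t,y)$, i.e. $M = \partial F/\partial t$ and $N = \partial F/\partial y$.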

I can't decide whether to continue reading the Advanced Engineering Mathematics book or learn the topics it contains via Paul's Notes and other resources. My worry is that the reason Paul's Notes seem clearer is simply because they're more superficial.

What are some other good learning resources for advanced engineering undergrad math?

> compare the explanation of integrating factor and exact equations

those are fringe subjects, completely irrelevant for the modern usage of differential equations. They are useful only in computer algebra when you want to implement differential galois theory. In practice you want to understand the overall behavior of your system (qualitative theory) or compute particular solutions numerically (using numerical methods, which are more precise than evaluating the expression of the exact solution).

You'd do much better with a qualitative book about differential equations (e.g., Arnold), about numerical analysis, or about dynamical systems (e.g. Strogatz).

Integrating factors are important motivations in the design of some numerical algorithms. Some keywords: matrix exponentials, semigroup theory, exponential integrators.
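To make that connection concrete, here is a minimal illustrative sketch (my own, not from any of the resources above) of an exponential (exponential-Euler) integrator for y' = λy + N(t, y), where the integrating-factor idea lets the stiff linear part be propagated exactly:

```python
import math

def exponential_euler(lam, N, y0, t_end, n):
    """Exponential Euler for y' = lam*y + N(t, y).

    The linear (possibly stiff) part is integrated exactly via exp(lam*h);
    only the nonlinearity N is approximated, frozen at the left endpoint.
    This is the integrating-factor trick turned into a numerical scheme.
    """
    h = t_end / n
    E = math.exp(lam * h)      # exact one-step propagator for the linear part
    phi = (E - 1.0) / lam      # weight for the frozen nonlinearity
    y = y0
    for i in range(n):
        y = E * y + phi * N(i * h, y)
    return y

# With N identically zero the scheme reproduces exp(lam*t)*y0 up to rounding,
# regardless of step size -- which a plain explicit method cannot do for stiff lam.
y_lin = exponential_euler(-50.0, lambda t, y: 0.0, 1.0, 1.0, 5)
```

The matrix version replaces exp(lam*h) with a matrix exponential, which is where the semigroup-theory keywords come in.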

Does anyone know any website similar for advanced linear algebra and probability with practice problems explained with detailed solutions step by step? I know there exists other great textbook like Strang's, but I often find textbook based learning resources lacking because of lack of detailed solutions.

Not really a website, but I believe you can find ebooks online...

I highly recommend Numerical Linear Algebra by Trefethen. It gives very detailed descriptions of particular interpretations of the singular value decomposition and eigenvalues, it works out detailed algorithms for LU factorization, eigenvalue/eigenvector decomposition, QR factorization etc. If you know basic linear algebra, the book is a pleasure to read through. For this crowd of people it is also very practical.

I don't know a good resource for probability... it is a much more diverse subject than linear algebra (which is a very small, very detailed subset of algebra).

Not a web site, but the book One Thousand Exercises in Probability by Grimmett and Stirzaker is just what it sounds like: 1000 practice problems with solutions. It's a companion to their textbook Probability and Random Processes, which I think is one of the better introductory textbooks on probability theory.

I took Calc 1 and 2 directly from Dr. Dawkins back in 2010-2011. He was the best teacher I had in college, hands down. Totally changed the way I thought about math, and I've been in love with it ever since.

DiffEqs... the only college class I ever got a C in!

Oh wow, this site was an absolute godsend in college. I was stuck in a DiffEq class with an abysmal professor and barely hanging on. I found this a few weeks in, stopped bothering with useless lectures, and went from low-60s to high-80s over the rest of the semester.

First and second order diffeqs with constant coefficients are a big deal; you should at least learn those. You can solve them algebraically via Laplace transforms, and the theory is very beautiful.
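A quick sketch of the mechanics on a standard textbook example (y'' + y = 0 with y(0) = 0, y'(0) = 1): the transform turns the ODE into algebra in s,

```latex
\mathcal{L}\{y''\} = s^2 Y(s) - s\,y(0) - y'(0)
\;\Longrightarrow\;
s^2 Y(s) - 1 + Y(s) = 0
\;\Longrightarrow\;
Y(s) = \frac{1}{s^2 + 1}
```

and inverting the transform gives $y(t) = \sin t$, with the initial conditions baked in from the start instead of fixed up afterwards.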

Used these sites all the time during my engineering degree: improper integrals, diff eq, etc.

these are the best notes ever if you missed class or a concept.

Wow, it's been a while. Does anyone actually use this stuff after college???

This is why people suggest that terms like 'software carpentry' more accurately capture what 99.5% of programmers do than 'software engineering'.

Yes? Very common in Engineering disciplines, for example.

IMO that's a fair question, and I think it's good to ask.

https://news.ycombinator.com/item?id=18182657 is an example in this thread, with regard to writing computer games.
