Hacker News
Algorithms, by Jeff Erickson (illinois.edu)
1463 points by bdr 49 days ago | 238 comments



Jeff Erickson was my algorithms professor in 2012. He exemplifies the articulate, passionate educator I wish I'd had for my other CS subjects. I recognize many of these notes, having read them many times in preparation for some quite difficult exams. A fun anecdote shared among people who've taken the class is the 25% credit given on any exam question just for writing "I don't know", effectively a reward for acknowledging your own shortcoming and for saving the TA the time to decipher a bullshit answer.

Professor Erickson, if you're reading this, thank you for being the best educator of my college days and for making your beautifully-written notes available to everyone.


>25% credit given on any exam question just for writing "I don't know", effectively a reward for acknowledging your own shortcoming and for saving the TA the time to decipher a bullshit answer.

That’s brilliant, yet I’ve never heard of it. Should be standard scoring for written exams.


Random other point of brilliance I've seen: our organic chemistry teacher (who was universally loved in the program) had a rule about test corrections. If you wanted a correction on something you believed you deserved credit for, he would only offer to regrade your WHOLE test, which meant you could actually end up with fewer points on the regrade, because it was he and not a TA regrading (it could have worked both ways). It really scared off all those one-off "can I get an extra point here" requests in a 300-person class.


When I was a professor this is the way I would regrade too. I would regrade blind to the first mark so I didn’t bias myself. I don’t think I ever had a regrade that changed more than 5% and most were identical.

I would also grade all exams the first time blind to who the student was, and then regrade a half dozen to make sure I hadn't drifted over the course of the marking process.

I used to play a game with myself (marking is very boring): guessing who would get what mark based on my in-class knowledge of the student (when you run labs, you do get to know the individual students). It was a rare student whose exam mark fell more than 10% outside my estimate.


This was the policy for any final exam at my alma mater.

You had to be rather certain that you were sufficiently wronged overall, otherwise a few points more on one question could be offset by a large number of negated points from other questions where the professor thought that the TA might have been too generous.


I think that was the way it worked at my university too, at least officially. Unofficially, very few teachers actually graded that way, and when I saw it happen, people complained about it a lot.

Then again, the teachers that did it seemed like they were doing it punitively (e.g. subtracting the exact amount of points they were forced to add.)


This was a common approach at my uni. I only challenged it once, but the issue was that an entire question wasn't marked at all, so the lecturer didn't remark the rest.

One other time, I asked the lecturer (on a different paper) where I went wrong on a question, and it turns out I was right all along; I got that question regraded without the potential downside of a full remark.


I’ve never understood why that’s a good policy. You’re (vindictively) punishing the student for pointing out — albeit selectively — where the teacher, supposedly a bedrock of truth on the course material, has falsely labeled your “training data”.


I think you're running under the assumption that all students making the request are doing it for the right reasons.

I could be wrong, but my intuition goes the other way and I'm assuming most students do it to get a better grade, not learn more.


Yeah no, you don't get to say students should care more about learning than grades when jobs, opportunities, and scholarships given by the very university that claims learning is more important are riding on grades.

It is the professor's responsibility to grade accurately, and honestly if there is so much inconsistency in grading that there's a significant probability that the student will lose marks despite having pointed out a reason they should gain marks, the professor is screwing up very badly[1]. Don't pin that on the student not having noble intentions or whatever.

[1]:Or at least that's the case in STEM. My humanities friends tell me that grading is much more subjective there which is a bit disturbing but I'm hardly qualified to make such judgments.


As long as we're getting all realpolitik about it, the reason is that lots of students are assholes who will run roughshod over any professor who shows weakness. Source: was a student, watched it happen routinely.

Meanwhile, in a less politically charged situation, it's pretty normal, or at least smart, to go back over any area where you messed up with a fine-toothed comb.

If (and this is a rather pessimistic assumption) the student is likely to lower their grade on a regrade, in my experience it's more likely to be because the original grader was being generous in the face of ambiguity, rather than because of random systematic errors. That's because the graders are mostly nice people. If that's not good enough for someone, they had damn well better have a good reason for it. I have no problem with a policy that enforces this. The politics are a distraction from the ethical question.


That policy would make sense, but it involves major caveats that aren't clear from the original description. It works because it's specifically designed with a "generosity buffer" that (by design) only benefits your grade, and thus doesn't indicate the teacher was falsely labeling answers as correct when they weren't.

That's not the same thing as the original, which implied "my grading is so random, hope you're lucky on the rescore".

Edit: I had a high school teacher with a policy where she’d add X points to every score from the get-go, and you’d lose those if you objected to the grading, so you’d only object to misgrades of more than X points. But that’s not the same as the original “okay, let’s roll my truth-recognizer dice again!”


Exactly. It's ridiculous that teachers are fine with such inconsistent grading on something they (and their graders) are supposed to be experts at -- let alone that they take such joy in leveraging such unhelpful inconsistency against students who objected to them propagating falsehoods!

It reminds me of those "drunk driving costs you $15k" ads, where most of that comes from the expense of navigating the court system (bail, attorney's fees), and the state is somehow proud of this fact.


>I think you're running under the assumption that all students making the request are doing it for the right reasons.

No, that's what the "albeit selectively" clause was referring to.

>I could be wrong, but my intuition goes the other way and I'm assuming most students do it to get a better grade, not learn more.

As in my comment, it doesn't matter. Their job is to be an expert on this material. Failing to correctly label an answer is failing at their job. Punishing someone for pointing out a failure at your job is unprofessional.


I agree. I've seen this done, and it was effective. Combined with requiring the request to be made in person, during office hours, it also got struggling students to actually attend office hours (which were often otherwise empty, another waste of the grader's time).


The students I knew who tried to complain their way into a better grade would have been better off if they'd tried to study.


This is functionally equivalent to being given negative marks for getting answers wrong, which I've seen on multiple choice tests.


Only if no partial credit is offered, which is not how his exam[0] reads to me:

    As usual, answering any (sub)problem with “I don’t
    know” (and nothing else) is worth 25% partial credit. 
    Yes, even for problem 1. Correct, complete, but
    suboptimal solutions are always worth more than 25%. 
    A blank answer is not the same as “I don’t know”
[0] http://jeffe.cs.illinois.edu/teaching/algorithms/hwex/s18/fi...


I had a math prof who would give negative marks on proofs, preferring you say “I don’t know how to do this step” and finishing the proof over trying to BS the step.


A math prof of mine would label insufficient initialization in recursive proofs with i.i. (it also stood for a few other common mistakes, such as “incorrect integration” or whatever). And he would invariably add “i.i = -1”, which was the penalty for such mistakes (exams being graded out of 20). A deliciously nerdy joke, unless you’re on the receiving end.


How do you skip a step in a proof? You just say "assuming I can prove this assertion, this other stuff follows"? That seems to risk accidentally skipping way more steps than you thought you were skipping and quite probably the meat of the proof.


> How do you skip a step in a proof?

Well, from the math profs I've seen, the usual method is to interpose “it is intuitively obvious” in place of the skipped step(s).

The tricky part is having a correct intuition about what you should be skipping to, sure, but that's the same problem you face doing the full proof (minus actually figuring out the justification), since humans don't generally do proofs by exhaustively listing every possible next step from what is already proven, BFS-style, until reaching the desired result and then pruning all the other paths.


Essentially that. There is a risk of skipping a majority of the proof and hence a majority of the points, but point-wise it would still work out better than handwaving the step and getting negative points.

It's been about 25 years, so my memory is a bit fuzzy, but it was an analysis class. I don't remember if I ever availed myself of this. I did have a classmate who completed, but didn't turn in his homework a couple of times. (I don't recall if it was the entire problem set or just a couple of problems that he omitted, but he got the paper out to consult while the prof was going over the answers.)

The teacher was competent but quirky - he also required the students to purchase a stapler (to staple their homework) and locked the door after class started (if you were late, tough luck).


Prompted with:

    Given A, prove Z
You'd append

    Assume G => H
...to the prompt, then you'd prove A=>G, and you'd prove H=>Z.

You're essentially just using one-too-many axioms to get the job done, which is less elegant, but correct.
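For what it's worth, this is exactly what the `sorry` keyword does in a proof assistant like Lean: you state the skipped step openly, and the checker still verifies everything around it. A toy sketch (the proposition names are just placeholders matching the A/G/H/Z scheme above, not anything from the book):

```lean
-- Toy sketch of "Given A, prove Z" with the middle step admitted.
-- `sorry` marks the gap explicitly; Lean checks that the rest composes.
theorem a_implies_z (A G H Z : Prop)
    (hAG : A → G)              -- the part you proved in full: A ⇒ G
    (hHZ : H → Z)              -- the part you proved in full: H ⇒ Z
    : A → Z :=
  have hGH : G → H := sorry    -- the skipped step, stated rather than handwaved
  fun a => hHZ (hGH (hAG a))
```

Lean accepts this with a warning that the proof depends on `sorry`, which is roughly the grading policy described above: the surrounding logic earns credit, and the admitted gap doesn't.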


I wonder how much credit the "this is the work of Satan" answers got?


None, because I never found out who submitted it.


A friend of mine took a game theory class with this system. Partial credit was also given for partially correct solutions. Completely wrong bullshit answers would lose 10%, or something like that.


Yes, obviously wrong answers (like negative energy) would get negative credit. They'd get >=0 credit if they'd annotate the answer with "I know this is wrong, but I can't find my mistake."

Sounds sensible to me.


He was my algorithms professor in ~1999, and he garnered identical praise from our cohort, too.

His approach to demystifying recursion is perfect (Chap 1.2): "The Recursion Fairy will solve all the simpler subproblems for you, using Methods That Are None Of Your Business So Butt Out"

Thanks for your teaching and for this project, Jeff!


I've tried a few approaches to explaining recursion ("just assume that it works", "trust yourself", "turn off your brain" (the latter of which is from Will Byrd)), but I like "Recursion Fairy" an awful lot. Might have to try that next time.
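For anyone who wants to see the fairy in action, here's a minimal sketch (mine, not from the book) using the classic Tower of Hanoi. The only case you reason about directly is the base case; the two recursive calls are delegated on faith:

```python
# The "Recursion Fairy" mindset: move n-1 disks out of the way, move the
# largest disk, move the n-1 disks back on top. How the fairy handles the
# n-1 case is None Of Your Business.
def hanoi(n, src, dst, spare, moves):
    if n == 0:
        return  # nothing to move: the one case you handle yourself
    hanoi(n - 1, src, spare, dst, moves)  # fairy moves n-1 disks aside
    moves.append((src, dst))              # you move the biggest disk
    hanoi(n - 1, spare, dst, src, moves)  # fairy moves them back on top

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))  # 2**3 - 1 = 7 moves
```

The correctness argument never inspects the recursive calls; it just trusts that they solve the smaller instance, which is the whole point of the metaphor.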


We designed a small tool to bring the fairy metaphor even to lower secondary school students. I actually believe it is a very promising approach.

"Nothing to fear but fear itself: introducing recursion in lower secondary schools"

https://aladdin.di.unimi.it/materiali/pubb/2017_latice.pdf


That looks super interesting! I'll take a deeper look when I get the chance. Thanks for sharing!


You're welcome!


Super interesting! Do you happen to have an Italian translation/version of the paper?


Just remembered: the work was in fact a master's thesis, and the final report is in Italian. http://aladdinsrv.di.unimi.it/archive/pdf/tesi-previtali.pdf

Enjoy!


Amazing, thanks a lot!


No, I'm sorry... It was thought in Italian... but written only (more or less) in English :-)

But the supporting tool is actually in Italian:

https://aladdin.di.unimi.it/sw/fatine/


I've also taken his class and can attest to his subject matter expertise and excellent teaching style.


Me too. His notes are amazing in my opinion, and I’m really excited to read some of this book that I didn’t cover during my classes.

I prepped for all of my interviews by just reviewing these notes, and it seemed to work really well.


I think the rule was you had to attempt at least one question (couldn't say "I don't know" for the whole exam). Am I remembering that correctly?


No, I've had several students answer "I don't know" to every question on the final exam. (About one every two or three years.) Without exception, they got a 25% on the exam and an F in the class.

Other theory instructors at Illinois do put limits on their IDK policy, like "at most 10% of the total points", or "for at most one question", or "not in my class". So far I've stuck to my guns.


Making things difficult is the easy way out; the opposite should be promoted.


> Please do not ask me for solutions to the exercises. Even if you are [an] instructor, I will say no.

That's kind of a bummer. I like to be able to check my answers when teaching myself things.

Am I somehow alone in that?


Hi, I'm the author.

I'm honestly torn about this. There is a real tension between the pedagogical needs of students in formal classrooms and the pedagogical needs of self-learners. I've chosen to aim for the former. Yes, I know it's a bummer.

(From experience) providing solutions interferes with the learning process of my own students at Illinois. I have to change up homeworks and exam questions every semester, because otherwise students will look up and copy/memorize the answers instead of trying to figure them out, which means they do worse on the exams where they HAVE to figure things out.

Also, a significant majority of the requests I get for solutions are from university students who want to cheat.

I do provide solutions to the homework, exam, and lab problems I assign in any particular class, after the fact. (And once the semester ends, I release those solutions publicly, despite the protests of colleagues at other universities.) And I have started providing model solutions in the homeworks, so that the students know what kind of answers I'm looking for. (See the course materials page, under CS 374.)

In practice, my stance is becoming increasingly moot, as more of my official solutions get uploaded to places like CourseHero and Koofers and their many foreign-language equivalents.

I am very likely to include solutions for a subset of problems (maybe three or four per chapter) in a future edition. But that will take a significant amount of time, and I wanted to get something out the door.


Thanks for this explanation. This is one of the things that popped out at me when I looked at the book page. The statement about not providing answers seemed quite dogmatic, and my initial reaction was, why? Turns out it's not so dogmatic, and there's a thoughtful and nuanced explanation. Perhaps you could include some of this on the book page, or provide a link to the explanation elsewhere.


Completely agree. Being outside academia has apparently made me ignorant of a lot of the pitfalls of actually posting said solutions.

Makes perfect sense when put this way.

I also really appreciate the response from the author. I forget that the gurus behind books like this often have a presence here on HN.


Not always the case, though; Prof. Erickson seems to be the exception.


Good idea.


>In practice, my stance is becoming increasingly moot, as more of my official solutions get uploaded to places like CourseHero and Koofers and their many foreign-language equivalents.

I've found that whether or not a professor releases solutions to assignments (or even tests), some segment of the student population will have access to the solutions. Releasing them publicly ensures the access is universal, instead of limited to those with the right connections/access.


I found out in my last year of uni that fraternities and sororities had a huge, well-run archive of homework, tests, and solutions for every class ever taken by their members. This was the first time I had been so close to structural class benefits. This is absolutely how those kids skated through, many of whom should not have graduated.


I know that my copy of CLR went into the recycling bin because the 'left as an exercise to the reader' aspect of the book made it completely useless as a reference book. Even when the Internet was wrong (as it often was) it was better than nothing (which is what CLR offers). Also the damned thing has so much clay on the paper that it weighs more than a brick.

I wonder if a better solution is to provide a set of questions and answers for the book but not use the book questions as the graded homework/exam material.

The former has to be dynamic for the reasons you listed. The latter doesn't, and any professor who tries to use your book without creating their own questions will quickly find themselves in the same boat you were in, regarding memorization vs. internalization.


I'm a self-learner, and I find that having a strong sense about my solution is usually good enough. If my answer feels brittle, I simply review the problem statement or the preceding definitions and theorems.

I usually look at the solutions when I've come to the conclusion that the problem is just too difficult and typically discover I hadn't thought of something that made the problem accessible. I would argue that if there are sufficient examples within the chapters, not providing solutions to the exercises shouldn't be a problem.

That is unless you have multi-month/research level problems à la Knuth.


> (From experience) providing solutions interferes with the learning process of my own students at Illinois. I have to change up homeworks and exam questions every semester, because otherwise students will look up and copy/memorize the answers instead of trying to figure them out, which means they do worse on the exams where they HAVE to figure things out.

While I understand the desire of a professor to help out their own students, at what point does this become a matter of individual responsibility? Shouldn’t it be the student who refrains from memorizing answers?

When I took my algorithms course, we had a reference textbook with questions at the end of the chapter, but no solutions. Some people at the beginning of the semester tried to crowdsource the answers using a Google doc but that effort failed. It didn’t really matter though, since our homework and exam questions were always renewed each semester through the work of course staff and the instructing professor. In hindsight, I may have taken this for granted as it seems other schools don’t have a “training” and a “testing” set of problems - I would say having both is better than only having one.


Of course the student is ultimately responsible for their own learning, but as the instructor, it's my responsibility to help them learn.

Dangling a juicy piece of bacon in front of their noses will not help them eat their vegetables.


There appears to be some evidence to suggest that traditional (grade-school style) "graded homework" at the advanced undergrad and higher level may just not be particularly good pedagogy though [1]. If possible, it may be better to assign ungraded homework with complete guided solutions provided, plus regular in-class quizzes (the latter to motivate students to actually spend time on that assigned homework every week) [2].

[1] https://scholarlycommons.pacific.edu/soecs-facpres/16/
[2] https://www.tandfonline.com/doi/full/10.1080/00091383.2011.5...


As a cs student I am quite conflicted about this as well. Sometimes we do need answers, either to help break into understanding a problem or to confirm that we have not come to a terrible conclusion.

On the other hand, many classes are graded competitively, so there is enormous pressure to do well on homework while you know others have no scruples about being dishonest.


I find this approach bizarre and honestly kind of backwards. My favorite math professor assigned homework but essentially (sometimes literally) didn't grade it at all. It was strictly for our own learning. I did it on scratch paper and threw it away, but enjoyed it more because I didn't have to worry about formatting it for clarity. The exams constituted almost all of our grade, and were plenty to convince at least me to take the exercises seriously. I've often wished every mathish course was taught this way. You don't have any choice but to change the exams every time (or people will cheat), and exercises are entirely designed to help us learn, so why not lean into it?


Thank you very much for making your textbook available.

Maybe having a separate set of problems for the online version, with solutions, would be appropriate?

I have been wondering this for all freely available textbooks: have you ever thought about turning your book into an open source project? I see you already manage issues on GitHub, but how about managing the actual content there as well?

Producing a printed book might need tighter editing than shipping software, but I feel like there are a lot of books with partially overlapping content, and I feel like self-learners would be better served by larger books that are not meant to be read end-to-end, but that are "integrated" into a larger whole.


Yes, I have thought about making it into an open-source project, like Pat Morin did with his Open Data Structures textbook, or Boaz Barak with his Modern Complexity Theory textbook. (Both highly recommended, BTW.)

But I'm hesitant to release the LaTeX source files in their current (rather grungy) form. Too much of a control freak, I guess. Maybe for the next edition.

Also, the figures are all in a closed file format (OmniGraffle). In principle, I could convert everything to an open format like SVG, but (0) converting everything would be hell, (1) LaTeX doesn't understand SVG files directly, (2) in principle, I can connect LaTeX to Inkscape to convert SVG to PDF, but the translation is always imperfect, (3) using Inkscape makes me want to tear my hair out, and (4) I'm a control freak.

And don't even talk to me about tikz.


Thanks for the answer, and the pointer to those books! I hadn't really thought about the layout challenges.

Though I think the benefit would be less about just releasing the source (the Open Data Structures book makes a compelling case for that) and more about fostering collaboration on the content. The books you mention have overlapping content, and my heart really wants to join them in some glorious whole :)

Maybe that's crazy, the books clearly have different styles, but I feel like writing more advanced texts as extensions to an evolving intro book would be better for self-learners than just having the books stand alone.

FYI, the link to your course materials on http://opendatastructures.org/ seems to be broken now.


For what it’s worth, I find that the answers to the problems in TAOCP almost always strike a good balance between explaining the steps to the solution and leaving enough out that the reader still has to work to understand how the proof works. But that’s not applicable to all types of exercises.

Your proposed solution (include solutions for a few problems) is definitely a good approach: enough to help the self-learners along while forcing the students to work—I assume you’ll be assigning the unsolved problems in class :)


thank you for the reference material! i'm a student at central michigan right now and will be heading into my intro. to algorithms class this upcoming semester. i'll be sure to use it to supplement (and not pester you for free answers)

from what i see elsewhere in this thread, you've got quite a good reputation around illinois. i recently lost a favorite professor of my own mid-semester last year, and it made me realize just how much a good instructor means to people. so thank you for being that for students!


Maybe for some problems it would be possible to include just the final answers? E.g. a set of possible inputs and expected outputs, so one can test the solution.


Thanks for making this available under a Creative Commons license, which I also use for several of my books.


Thank you for the wonderful material you've made available!


Idea from common job interviewing process:

Make the unit tests public. Or useable interactively online.

I know it'd be a lot more work. If you had a repo for unit tests, I'd contribute.


WHAT unit tests? This is not a book about programming. The solutions are not code. None of the homework in my algorithms classes CAN be auto-graded.


Oh. Serves me right for not peeking at your book first. My bad.


You could consider providing answers and feedback in a premium priced forum or email subscription.


Ew. Ew ew ew. No.

Wait, did I say that aloud? Sorry, I meant "I'm already busy enough, thanks."


Or, you know, he could just keep being a professor in a top-ranked university CS program. :)


How is that a solution? It would provide well-off students with an opportunity to skate by and exacerbate the problem of having solutions circulating in the wild.


It would increase the odds that anyone getting the solutions is using them to enhance their learning experience rather than as a crutch.


The Princeton algorithms site has solutions for some of their exercises. They also have online lectures, along with animations, visualizations, etc. It's an excellent resource for those who want to learn algorithms.

https://algs4.cs.princeton.edu/home/


I have seen many people voice that complaint. Personally, I rarely had issues verifying a solution once I had worked through a problem.

What happens much more frequently is that I get stuck somewhere and get tempted to look at a solution if one is provided, which in turn nullifies the achievement. In this sense, not having solutions makes it easier for me to plough through.


If you prefer not to have solutions, pretend you don't have solutions. Having solutions is strictly better for the reader (I'm ignoring the cost of producing them in the first place).


"pretend you don't have solutions" is not a realistic approach. The strength of people's conviction to this will probably fall on a bell curve and few can resist the temptation when the problems get really tough. Accounting for how humans are, I do not see how one can easily say "Having solutions is strictly better". I can easily think of cases where it is indeed better to have solutions but to say "strictly" requires a bit more diligence.


Don’t rob the self-learner, who doesn’t have access to TAs, fellow students, and professors, of the ability to check their work, just because someone else doesn’t have the discipline not to abuse it.

Textbook solutions are good for those that aren’t in school, aren’t in formal programs and have no other way of receiving feedback.

The “you should know if you’re right” mentality doesn’t necessarily fit a person who’s been working for 10 years and has been out of the academic mindset: one who is a beginner again and could easily fool themselves into thinking they have a correct answer.

It doesn’t allow for correction of false thinking. Anyone can think their proof is correct. But fewer beginners can properly recognize when they are wrong.

This sort of mentality is a bit elitist and gatekeeping.


I had a girlfriend who was doing her PhD in physics. I remember one night she and her classmates spent all night working on a problem that was essentially unsolvable. The next day they went to class, and all of them had made their best attempt, but no one could complete it. The problem? The professor had accidentally used the wrong metric on one of the numbers, meaning they couldn't do the steps to what should have been an easy solution. They did notice this (they are a smart group of people), but since that wasn't supposed to be the issue, they spent hours and hours trying to solve what he gave them without any hope of actually doing it.

In the end, he was like "Oh my bad," and corrected his mistake. The point of the story is that they were able to essentially ask him for the solution and they were able to check what they'd done, and in the end he made the mistake. In situations like this, he should provide the answer if nothing else to show that he didn't make a mistake in the problem set. People are fallible, no matter how brilliant you are.


I had a similar thing happen in high school physics. We were supposed to figure out where and when a projectile was going to land. The only problem was that it was never going to land: the initial velocity was too high.

In retrospect I think it was a great lesson for my future career as a data engineer. Doesn’t matter what the source is, any datum can be just plain wrong.


> The only problem was that it was never going to land: the initial velocity was too high.

As in, it escaped the gravitational field?


Yep.
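Out of curiosity, the threshold is easy to check numerically (assuming the problem meant literal escape from Earth, using textbook constants and ignoring air resistance):

```python
import math

# Back-of-the-envelope escape velocity: v_esc = sqrt(2*G*M/R).
# Above this launch speed (drag ignored), "where does it land?" has no
# answer, because it never comes back down.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # mass of Earth, kg
R = 6.371e6        # mean radius of Earth, m

v_esc = math.sqrt(2 * G * M / R)
print(f"{v_esc:.0f} m/s")  # roughly 11.2 km/s
```

So any stated initial velocity above about 11.2 km/s makes the "when does it land" question unanswerable.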


On the other hand, struggling all night to solve an un-solvable problem is a valuable learning experience, too. In the real world we struggle all the time with problems that don't have solutions.


That professor probably had produced the answer already, using the metric he had initially intended. Having a set of answers doesn't mean you didn't make a mistake in the questions.


But it does clue you into whether the professor used a different set of assumptions than you did.


Presumably you wouldn't see the answers until after you spent all night on the assignment.


There's a story of a prof injecting one error into every test.

I had an industrial engineering teacher who did something similar. Every team had a (fake-money) budget for each project and bought materials from him (sole source). He'd randomly cut corners to keep everyone on their toes. A good lesson for the real world.


I took a quick look through the first set of exercises and a lot of the problems are open-ended and don't have specific answers. Seeing a different answer than yours doesn't tell you much about whether your answer is correct.

Even if you do have a similar proof to the one in the answer key, that doesn't mean your proof is correct. Correctness can often hinge on very subtle distinctions. You really need a TA or instructor to read your proof if you want meaningful feedback.


As I've said elsewhere, I'm looking out for my own students first.

One of my colleagues suggested setting up a "club" on Perusall (https://app.perusall.com/welcome) or something similar that would allow people to discuss the book and/or work through problems collectively. (They have a club for CLRS, for example.) Right now all their "clubs" are restricted to books with Reputable Publishers (ptui), so they might need some persuasion.


Presumably nothing's stopping you from putting together a book of solutions and posting it online. It seems like a much easier task than writing this book, with all its problem sets, and then posting it online for free for the world to read.


It might be an elitist and gatekeeping mentality, but I have to say that calling it "robbing" when someone provides a free resource but doesn't tailor it to your exact situation is a very entitled one.


It is absolutely a realistic approach, what are you talking about? Is it also not a realistic approach to eat your vegetables or perform routine exercise like walking around the block?

Without solutions at the back of the book, self-directed learners can't verify their own work. Don't cut out the less privileged learners from bettering themselves just because you have no concept of discipline.


Well, it has worked really well for me, and obviously for a bunch of other engineers, as a number of our books had solutions at the end of each chapter.

As long as cheats are only available for ungraded homework/self-assessment tests, cheaters are only going to cheat themselves.


I completely agree.


yes, exactly.


Solutions are to questions as tests are to software. For me they offer peace of mind no matter how sure I am.


Solutions were stepping stones to my success. I am mature enough not to be tempted without trying out the problem. I sleep on it, and only at last look at the solution, if I ever find myself at my wit's end. That is because I have a firm understanding that there are no shortcuts to success. Having come from a third-world country where cheating yourself delays even the basic resources needed to get a job, I was raised with peers who found 'peeping at solutions' an abhorrent act. It's a big shameful thing to do in my culture.

Solutions are also how I learn the author's thought process, if he arrived at the final answer in a better way. Solutions expose errors in questions. Errors and typos in Indian texts were far more common in questions than in solutions; there was simply not much chance for a solution to have a typo or logical error.

If there are no solutions, I'd simply set the book aside until I am proficient in the subject and want to try out the problems with confidence. There is always this insecurity when it comes to dealing with logic. Little things can slip in abstract thinking pretty easily, and silly mistakes take a toll. People like myself prefer to solve problems individually at our own pace. Solutions are a must for me. Of course, that 'later' almost always never comes.


No, you are not. The extensive problem sets at the end of each chapter are the main contribution of this work (imho). However, this is not a shortcoming these days.

I spend up to ~1 hour on each problem, and then I google keywords from the questions. Someone somewhere has one possible solution. Usually, this is enough to set me on the right track.


> I like to be able to check my answers when teaching myself things.

I agree, although looking at the exercises in this book, they seem to be of the form where the only way to give you the answer is to give you the full solution, and they appear to be phrased in a way that makes it easy enough to verify the answer yourself once you’ve figured out the problem. I always feel like the ideal (for me at least) are problems with numerical answers listed in the back of the book – then I can check my answer against the book’s answer; if I got 42 but they got 5, I know I made a mistake somewhere, but it’s still up to me to go back through and figure out where the mistake was. It seems like most of these problems don’t lend themselves to short numerical answers like that.


as is the case with all uni oriented textbooks, they can't be used alone if you don't instantly grok the content. i'm sure just searching github for a sample implementation followed by a discussion of the finer points on stackoverflow/programming reddit of choice will do you

a 5 minute search gave me this - https://github.com/floyernick/Data-Structures-and-Algorithms for algos implemented in Go. github is littered with these kindsa samples(some of them highly vetted) https://github.com/topics/algorithms-and-data-structures


The author probably agrees with you. He indicates that he provides solutions for most of the problems he assigns to his class.


The book sources are available through GitHub, you or someone else could easily fork the project and add all the content that in your opinion is missing (including answers to exercises).


could be a monetization idea/incentive to purchase: purchase the book, get a unique key, use the key to sign in to a site to see results.


Nope. Anything I publish will be available free.


This is done occasionally in algo books to encourage their adoption in curricula (e.g. Dasgupta, etc.).

As a sibling comment mentions, it's relatively easy to verify your answer for correctness in some cases. The downside of this is that it becomes difficult to assess whether students are actually learning. One way to mitigate this is to withhold answers and assist with other teaching resources like TAs. This also helps teach the aforementioned 'verification' step.


The logo on the right is the Arabic lettering of the name Algorithmi (al-Khwarizmi), the Persian Muslim scholar after whom algorithms and logarithms were named.


It reminds me of an ashtray design which is famous in Sweden, by Stig Lindberg: https://balstaauktionshall.nu/images/custom/ProductTemplate/...

I remember tracing the lines with my finger as a kid, trying to decipher a hidden logic...


Algorithms are named after al-Khwarizmi, and "algebra" comes from the Arabic name of his book, but logarithm was coined from Greek.


that is beautiful! thanks for pointing it out! is it readable for people who can read Arabic?


It still takes a few seconds to identify the word since this is not how Arabic is usually written but yes the word "Al-Khawarizmi" is legible.

In fact, the logo contains text in both black and navy blue; the navy blue part is "Al-Khawarizmi", or in Arabic "الخوارزمي". The navy blue script is then rotated 90 degrees three more times.


no, it's very readable, because it's a well-known calligraphy style called kufi, as others have pointed out.


Yes, it is readable; it's called kufi.



thank you!


IIRC al-Khwarizmi means "from Khwarezm", a region in Central Asia.


Logo on the right of what? All I see is a geometric design in the middle of the cover. It doesn't look Arabic and it seems like it would be very cumbersome for someone to sign as a name.


it's a known calligraphy style called kufi, just google it.

https://www.google.com/search?q=arabic+calligraphy+kufi&clie...


Thank you.


The lines in blue in the cover image spell out the word


From another comment: http://www.kufic.info


Thank you.


It's an Iranian QR code.


No, kufi is named after a city in what is now Iraq, not Iran.

https://en.m.wikipedia.org/wiki/Kufa


Yes, but it used to be in Persia.

delinka 49 days ago [flagged]

...after whom...


Al Gore.

Al-Gore-Ithm


I am, in all likelihood, not in the target audience (having already spent much time with the material being presented), but this textbook has been a joy to read so far. I am only up to page 14, and I already need two hands (in base 1...) to count the number of times I've laughed aloud or cheered a particular point being raised.

Most algorithms books are dry, or they're obsessed by particular formal details, or (worse) they implicitly include optimizations in the algorithm without explaining what is needed for correctness and what is needed for efficiency. Having tutored on one of the books mentioned in the prologue, it can be a real struggle to gain a true intuition for algorithms when you can't yet tell the difference. But the sheer personality contained within this book is infectious, and it really is something of a page-turner. (Lest you think I haven't gotten to the stuff that "matters", I have read through the chapter on depth-first search -- again, an enjoyable read!)

What a great book. I'm definitely recommending this to my friends.


Same. I fell in love with it instantly and permanently when I got to the "NO, STOP, YOU'RE DONE" part in the Towers of Hanoi. I skipped around the book and the whole thing is like that. It's simultaneously concise and witty in service of the material.

The chapter problems look fantastic, too, and are just as well-written (and they cover a lot of ground).

It's really something!


gonna take this opportunity to ask for advice: i have an MS in CS and i've gone through all of CLRS twice (yes really all of it and really twice - once for my grad algos class and once in prep for interviews - and i still don't have whatever intuition i need to be able to effortlessly do DP. it's honestly kind of maddening - mincut/maxflow, RSA, knuth-morris-pratt etc are all completely obvious to me and i can whip them out pretty much effortlessly - but DP i struggle to find the optimal substructure and formulate bottom up (yes i can memoize but that's not clever enough for you know who). what's the magic combinatorial perspective/intuition that enables people to construct dp solutions so quickly??? yes i've read vazirani and gone through the clemson examples and etc. and skiena and sedgwick whatever but the problem is they're mostly all rehashings of the same solutions/perspectives. looking forward to this book's perspective.


Forget about efficiency for the moment and focus on discovering the underlying recursive problem.

LOTS of people struggle with dynamic programming. But in my experience, 90% of the difficulty with dynamic programming is actually discomfort with recursion, which is why I talk about recursive backtracking first.

Try reading Chapter 2. My goal in that chapter is to show the process of deriving recursive solutions---how to think about the problem, and what questions to ask---rather than just presenting the solution as a fait accompli.

I do have to assume that you believe in the Recursion Fairy, though. That's probably the hardest step. Computer scientists are TERRIBLE at delegating.
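(To make the "delegate to the Recursion Fairy" idea concrete, here's a toy sketch of recursive backtracking for subset sum - my own illustrative example, not code from the book:)

```python
# Recursive backtracking for subset sum: does some subset of nums sum to target?
# The "Recursion Fairy" correctly handles every smaller instance;
# we only decide what to do with the first number.
def subset_sum(nums, target):
    if target == 0:
        return True          # the empty subset already works
    if not nums:
        return False         # nothing left to choose from
    first, rest = nums[0], nums[1:]
    # Either the first number is in the subset, or it isn't;
    # trust the recursion to answer both smaller questions.
    return subset_sum(rest, target - first) or subset_sum(rest, target)
```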


> Recursion Fairy

I mentioned this elsewhere but figured I'd bring it up here since you're the author.

I've tried explaining recursion in a lot of ways like this, such as "just assume it will work", "trust yourself", and "turn off your brain". (The first two are paraphrases from Matthew Flatt, the latter is from Will Byrd.)

I think "Recursion Fairy" is my favorite way to phrase the same idea. I think there's something about the nature of invoking a sense of magic in the phrasing that might help people really believe that it's okay to just let the recursion do its thing and not think about it too much. I'll definitely be using "Recursion Fairy" when (if) I end up explaining recursion again.

Thanks for making your material available for free! Cheers!


i don't have a problem with recursion per se - certainly things like tree traversals (e.g. minimax) are very very obvious to me, and for example just a couple of days ago i was thinking about AD for recursively defined functions - but optimal substructure (what i think CLRS calls overlapping subproblems) is what i have trouble with. anyway, thank you for writing the book - i look forward to reading it and getting yet another perspective.


This book has about 26 pages dedicated to Dynamic Programming (the remaining 36 pages in the chapter detail different exercise problems, but I'm sure they're just as educational).

I think a strong foundation in recognizing the recurrence is key to becoming better at Dynamic Programming. If you're not able to first see the recurrence, then you're doomed to rote-learn a la videos on youtube where the main focus is table filling (I'm looking at you, Tushar Roy). Here's a snippet from the DP chapter in Erickson's book:

> In a nutshell, dynamic programming is recursion without repetition. Dynamic programming algorithms store the solutions of intermediate subproblems, often but not always in some kind of array or table. Many algorithms students (and instructors, and textbooks) make the mistake of focusing on the table— because tables are easy and familiar—instead of the much more important (and difficult) task of finding a correct recurrence. As long as we memoize the correct recurrence, an explicit table isn’t really necessary, but if the recurrence is incorrect, we are well and truly hosed.

On a quick skim of the chapter, here's what it offers that other traditional book chapters don't:

* Progressive optimization of memoized solutions. The example used is Fibonacci where each recurrence is cached, then it's transformed into an iterative for-loop based solution where a table/array is filled, and finally 2 variables are used instead of the whole array to store intermediate solutions.

* Dynamic Programming on trees (both as a data structure for storing intermediate solutions and sometimes as the problem itself, e.g. optimal BST construction)
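(The Fibonacci progression described above can be sketched roughly like this - a minimal illustration of the three stages, not the book's actual code:)

```python
from functools import lru_cache

# Stage 1: the plain recurrence, memoized ("recursion without repetition").
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Stage 2: the same recurrence evaluated bottom-up, filling a table.
def fib_table(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

# Stage 3: only the last two table entries are ever read, so two variables suffice.
def fib_two_vars(n):
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev
```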


I think if you find those algorithms obvious and you're able to write memoized solutions, then you already know most of what you need to know about dynamic programming. For example, Dijkstra's algorithm for shortest paths can be seen as DP.

The main thing is to write down a formulation for the answer for n in terms of answers for smaller n, or the answer for (n, k) in terms of answers for smaller (n, k). (And don't worry about how you'd compute them, just focus on getting a correct expression in terms of smaller ones. http://okasaki.blogspot.com/2008/10/score-one-for-induction....)

It can be tricky to formulate exactly what you're counting / measuring (e.g. what's "k" and what's the quantity you're optimizing for (n, k)), but once you have that, and once you have the recurrence, you can consider it a separate (and independent) step to figure out in which order to compute the values, so that you have each value by the time you need it (i.e. the bottom-up formulation that you said is the tricky part for you).

Beyond that I guess lots of practice with problems… e.g. about a decade ago there used to be weekly(?) TopCoder contests with editorials written later explaining the solutions (and the problems were graded by difficulty and even how many people solved it), and the DPs could get quite tricky. I believe those are still happening, and now there are other resources like LeetCode or whatnot.

Do you have an example of a problem that you struggled with, to see what's missing?


"Dijkstra's algorithm for shortest paths can be seen as DP"

Really? Sure, if the graph is a dag, but then Dijkstra is overkill.

Hmm. I'll have to think about this one.

+1 for the Okasaki shoutout.


Hmm, I wrote that from vague memory, and perhaps that's debatable! (There seem to be different communities that use “dynamic programming” slightly differently.)

But my main point was: if one can look at the shortest-path problem and formulate something like

distance(v) = min_{u->v} (distance(u) + edge_length(u, v))

then one probably has a handle on whatever it is that's hard about dynamic programming. Actually Wikipedia cites some references that consider Dijkstra's algorithm a case of DP: https://en.wikipedia.org/w/index.php?title=Dijkstra%27s_algo... and the same text (Wikipedia doesn't care about “self-plagiarism”) is also on the DP page where Dijkstra's algorithm is given as the first example: https://en.wikipedia.org/w/index.php?title=Dynamic_programmi...
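(For what it's worth, that recurrence turns directly into a short bottom-up computation on a DAG - a toy sketch of my own; the graph and names are made up:)

```python
import math

# Shortest paths in a DAG, computed straight from the recurrence
#   distance(v) = min over edges u->v of (distance(u) + edge_length(u, v)),
# evaluating vertices in topological order so each distance(u) is ready in time.
def dag_shortest_paths(topo_order, incoming, source):
    # topo_order: vertices listed in topological order
    # incoming: maps v -> list of (u, edge_length) pairs for edges u -> v
    dist = {v: math.inf for v in topo_order}
    dist[source] = 0
    for v in topo_order:
        for u, length in incoming.get(v, []):
            dist[v] = min(dist[v], dist[u] + length)
    return dist

# Toy example: s -> a (2), s -> b (5), a -> b (1), b -> t (3)
dist = dag_shortest_paths(
    ["s", "a", "b", "t"],
    {"a": [("s", 2)], "b": [("s", 5), ("a", 1)], "t": [("b", 3)]},
    "s",
)
```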


Here's an old mathematical perspective that treats dynamic programming as analogous to the calculus of variations. Very different from how it's usually talked about, I think. I haven't read much of it, but perhaps the change in perspective will be helpful to you:

https://www.rand.org/content/dam/rand/pubs/reports/2006/R441...


Dynamic programming in CS and dynamic programming in operations research share almost nothing besides the name.


i'm familiar with this (through stochastic control and reinforcement learning and general econ stuff) but it's not really useful for the whiteboard coding style DP that FAANG asks.


I'm honestly somewhat surprised that you-know-who is asking DP problems with that degree of complexity. Are you applying for a research/math-heavy role? If you're unlucky enough to see DP, usually it's pretty basic and memoization is considered fine, unless things have changed very recently. If they required that degree of rigor for regular SWE jobs, a lot of people I know certainly wouldn't be there.


> If you're unlucky enough to see DP, usually it's pretty basic and memoization is considered fine, unless things have changed very recently. If they required that degree of rigor for regular SWE jobs, a lot of people I know certainly wouldn't be there.

Did you interview in the past two years at the so-called FAANG companies? 'Leetcoding' has raised the bar considerably, even (or especially, in some cases) for senior positions.


I took Jeff's Algorithms class in college and he's one of the best professors I've had. I still struggled through the class but definitely made it through with a great experience.


I took CS 173 and 373 from Jeff nearly 20 years ago, with 273 from another great professor. That course sequence along with Combinatorial Game Theory (which I took on a lark) has had a greater impact on me than the rest of my university experience combined. A lot of that is due to the quality of the professors I was lucky enough to have.


Perhaps you might, when you get a chance, email him a brief note to tell him that. That'll make a person's day.


You know what, your book on linear algebra is really great too. I tried to understand LA from too many books but your book was the one which made sense for me.


:-) Thank you.


Day made!!


He's in the comments here so I suspect his day is already made! :)


Yeah, this may the 3rd or 4th time I've said something along those lines in a HN thread he's active in over the last 10 years.


CS 598 JGE with jeffe which covered more advanced and niche algorithms was 10000% the most impactful cs class I've ever taken.


Saw his name at the top of HN and immediately came to post the same thing. He taught me how to think and problem solve in a whole new way. Was the most memorable class and instructor for me.


Isn’t this the guy who is famous for being admitted to a PhD program with an exceptionally low GPA?

If so, why is he the exception and why aren’t more PhD programs looking for non-traditional talent?

Edit: I read his blog post. It gave me more insight. It looks possible for people with those sort of grades to be admitted even today, but they seem to need a cheerleader on the inside that will help them.


I graduated with a CompSci bachelor's with a similar GPA, spent around 10 years as a programmer in industry, and came back to get into the master's program. I was almost laughed off (a good thing), being told I would have to do another bachelor's.

Sold my house, got rid of all my stuff, and enrolled in a pure math bachelor's. Best decision of my life.

I thought that in my mid 30s, with a lot more discipline, being able to work longer and harder would've made up for the shit grades. I'm extremely thankful to the department head for waving me off like that, because it would've been a disaster.

Clearly Jeff could handle himself, however I think one must really know what they're doing. I thought I'd end up doing the 3-year math program in 2... it'll turn out to be 3.5 with good grades this time around. After that I know I'll be very solid for master/phd programs.


> I graduated with a CompSci bachelor's with a similar GPA, spent around 10 years as a programmer in industry, and came back to get into the master's program. I was almost laughed off (a good thing), being told I would have to do another bachelor's.

Can you expand on this? I'm in a similar position right now. I want to go back and get my master's. Graduated in CS about 12 years ago, mid 30s. What do you mean about having to do another bachelor's? I have not heard this before.

> Sold my house, got rid of all my stuff and enrolled in pure math bachelor's. Best decision of my life.

I recently bought a condo downtown: one, because I never plan to move again and rent goes up at least 3-5%/year, and two, because I can easily rent it out if I do end up leaving. The University of Minnesota is only a mile away from me at the moment. The mortgage is probably a little high for a grad student/TA salary, considering I'm making well into the six digits right now. This is my only debt, and all my stuff isn't much - mostly things I can't really sell for much now or that are required to work and live (computer, desk, monitor, clothes, eat, sleep). I have a couch and a TV to relax with outside of work.


My GPA was 2.37, and I would've had to take so many classes to bring it up to the cutoff point of 2.7 (B-). With 90 credits worth of classes already done, you'd need another 90 credits at 3.0 (B average) to make up the difference - or fewer if you get better grades - and that still only puts you at the low end of what they accept.

Now I presume if you did 2-3 semesters with a B+/A- GPA, they would take that as evidence you can handle the master's and let you in. However, I'm enjoying math way too much right now, and there are several classes (Algebra 1-3, Topology, Differential Equations, and several statistics classes I intend to pick in the option block) that will come in handy if I head into an ML master's. I'm also interested in Type Theory and all of Discrete Mathematics. Also, I feel math is a great program for improving at general problem-solving.

The first semester was really challenging; some classes covering material I had in fact already done (Calculus 1 and Linear Algebra) turned out to be surprisingly difficult. I'd say around halfway through the second semester I felt I had gotten my younger brain back.

I don't have expensive habits, have two restaurant meals a week, own a car, and live at the university residences (rent is $390/month, parking $850/year). My cost of living seems to be around 12k CAD/year, and I'm Canadian (tuition is $1600/semester), so I really have no idea how much you'd have to have saved up if you are in the USA.


My GPA 12 years ago when I finished was 3.5, I think. Thereabouts, anyway. I figured you were referring more to the length of time you were out of school, not your GPA.

My mortgage is far north of $390, haha. I don't have or need a car, though. I lease out my parking spot for $150/month or more. If I go and get accepted, I'll have either work pay or try to get a scholarship or something. My brother and best friend got free rides through their master's and PhD.


You don't need to pursue a second undergraduate degree if you just need to fulfill certain courses for graduate admissions.

Check out: https://ccaps.umn.edu/non-degreeguest-students

See if other schools of interest have similar programs if that one isn't a good fit.

See if you can take courses for credit/get research experience. Then apply for your desired graduate programs.

This is a good way to get letters of recommendations from professors too.


Well done!


Jeff, I'd be curious to know. Going through TAOCP is on my lifetime to-do list, and I feel I'm getting close to tackling it again, but I have no time right now. There are also so many other things I want to go over (some higher-order logic, TAPL, PFPL, the Software Foundations books, compiler design, and I probably won't pass up some category theory, since abstract algebra seems accessible to me).

Do you feel TAOCP is worth the time investment or should I just forget about it and tackle something like your book and spend the rest of my time on other topics?


I'm not sure I can answer that question for anyone but myself. I've worked through quite a few pieces of TAOCP when I've needed to understand a particular topic, but I always find that I lose interest.

But then I've never been able to learn anything by just reading. I always have to have a target problem in front of me, and then I'll read (and get frustrated by) every book ever written to figure out the best way to think about that problem. (Which means I've read a few dozen pages from hundreds of books, and I have pretty huge gaps in my math background -- abstract algebra and category theory being two big examples.)

For some target problems, TAOCP has been incredibly helpful, but for most of them it really hasn't. Knuth and I just care about different things.

For the same reason, I can't recommend that anyone work through EVERY problem in my book, either. Find the parts that are interesting and/or useful to you, and work on those. If you get tired or frustrated, work on something else; maybe you'll discover another reason to pick up my book again later. Or not.

Climbing the mountain is much more rewarding than studying the trail map.


As you've already got an answer from Jeff, I thought it might not hurt to add an additional one. IMO, you should ask yourself why you want to read TAOCP; doing it just because everyone recommends it is probably not worthwhile. Read it if you find the material or presentation interesting.

IMO one can think of each chapter (only 6 completed so far) or even each major section of TAOCP as a very deep book/monograph on that specialized topic. The writing is clear and delightful (IMO), but each of them does tend to go rather deep, more than you may care to know about that topic — so each page will require a fair bit of attention; it's not easy going. You'll get an in-depth understanding of a narrow sliver of topics.

Why not read a few of the newer sections and see if you'd like to read more in the same style? Knuth has been putting draft versions online, and they are collected here: http://www.cs.utsa.edu/~wagner/knuth/ — for example, you could read Pre-Fascicle 3B, which is on generating all [number-theoretic, or set] partitions, or Pre-Fascicle 1B, which is on a fascinating (and little-known) data structure called Binary Decision Diagrams. He uses these to solve many interesting problems, different from the focus of typical algorithms books (which would probably dismiss these methods as “brute-force”, as they don't affect the asymptotic complexity but do affect what's practical to do on real computers).


Thanks for sharing, really inspiring.


There was a small study that came out recently on biorxiv, with 29 PhD students with GREs all over the place -

>GRE scores, while collected at admission, were not used or consulted for admissions decisions and comprise the full range of percentiles from 1% to 91%. We report on the 29 students recruited to the Vanderbilt IMSD from 2007-2011 who have completed the program as of summer 2017. While the data set is not large, the predictive trends between GRE and long-term graduate outcomes (publications, first author publications, time to degree, predoctoral fellowship awards, and faculty evaluations) are remarkably null and there is sufficient precision to rule out even mild relationships between GRE and these outcomes. Career outcomes are encouraging; many students are in postdocs, and the rest are in stage-appropriate career environments for such a cohort, including tenure track faculty, biotech and entrepreneurship careers.

Now a GRE isn't a GPA (one's a specific test, the other is your grades' average), but both are used for university admissions, and I reckon both are not good predictors.

Edit: Link: https://www.biorxiv.org/content/early/2018/07/20/373225


Yes, that's me.

Admissions committees are looking for evidence of future success. Admitting applicants with spotty (not merely "nontraditional") backgrounds is risky -- they might be a diamond in the rough, or they might really be a weak student. And (at least in departments like mine) there are far too many applicants with stellar backgrounds to justify taking that risk.

At least, that's the usual argument.


What advice would you give to undergraduates that are interested in getting a Ph.D who don't have the best of grades?


- Work on, or help out with, a research project of a professor/established researcher in your field of interest.

- Related, getting authorship (first or otherwise) for an academic publication as an undergraduate is a promising signal of future research success.

- Also related, having great references from undergrad professors who are involved in research.

- Connect with faculty in the PhD program you're interested in.

Context: Got bad grades in undergrad courses related to my particular specialty (digital audio signal processing), but also did all of these, and was admitted to a top PhD program in my field.

Also, a lot of people in my program had success with this one:

- Applying for a masters in a related field or at a department that also has a PhD program that you're interested in, and having success there.


Yep, all this. Let me add two more points:

- Own your past mistakes. They happened. Don't pretend they didn't. Figure out the underlying cause of those mistakes, and gather EVIDENCE that you've resolved that cause.

(In my case, I was a LAZY undergrad. I'd never had to work in high school, and so I didn't know how to work in college. And then I got a real job, and it was either do the damn work or it'll be there tomorrow only the boss will be there in my office wondering why the hell I'm costing the company hundreds of thousands of dollars a day and why can't you just get this shit DONE already. And so when I applied for grad school the second time I had "smart but lazy" letters from my old professors, stellar GRE scores, and "smart and works hard" letters from my managers.)

- APPLY WIDELY. You are at a significant disadvantage compared to other students with stronger backgrounds. Do not imagine that your passion and good intentions and maturity are enough to get you into the top programs, or even into any particular program. You're playing a lottery that's stacked against you; buy more tickets.


That's very good advice! One point in particular resonated with me as it seems to be applicable to science in general:

> You're playing a lottery that's stacked against you; buy more tickets.

I take this to mean that it makes more sense to apply for many things, re-submit publications often (not without considering the feedback of reviewers, obviously), and so on.


Would you take a risk on someone that graduated with below a 3.0 GPA with Cs in their math/computer science courses, but went back to school, took some core CS courses, aced them, got some research experience, and then applied to PhD programs?


There are a lot more people going to university these days, and grades are inflated. That's the short story. If you have to look at many many applications for a spot at a top university you are more likely to choose someone who is a "traditional" talent.



> ♠ Black spades indicate problems that require a significant amount of gruntwork and/or coding. These are rare.

> ★ Orange stars indicate that you are eating Lucky Charms that were manufactured before 1998. Ew.

(p. vi)

:)

Really like the book already. The preface falls under the category of advice on learning to learn. Thanks for all your work. #stealthisbook


> Caveat Lector! Of course, none of those people should be blamed for any flaws in the resulting book. Despite many rounds of revision and editing, this book contains many mistakes, bugs, gaffes, omissions, snafus, kludges, typos, mathos, grammaros, thinkos, brain farts, poor design decisions, historical inaccuracies, anachronisms, inconsistencies, exaggerations, dithering, blather, distortions, oversimplification, nonsense, garbage, cruft, junk, and outright lies, all of which are entirely Steve Skiena’s fault.

Coming from a class where we used a decidedly poor textbook for algorithms, this looks like a joy to read.


Maybe it's because English is not my native language, but I honestly find this way of presenting the material rather confusing.

PS: oh, and of course all those 'It’s quite easy to show that the...' and similar 'by this point it should be obvious' remarks. No, it is not... I guess I will never understand or use those algorithms even though I'd like to.


Hi, I'm the author. If you find any particular claim of "obviousness" unclear, submit an issue request!

But please don't confuse "straightforward" with "obvious". Sometimes I deliberately gloss over mechanical details because I think they're a distraction from the main point. I'm NOT claiming that you should immediately know how to fill in the details; I'm claiming that filling in the details is boring.


I would agree that words like "obvious" or "trivial" can be disheartening to a learner.

If it's obvious to the reader, they know it whether or not you tell them it's obvious. If it's not obvious to them, then they will try to figure it out. Telling this person "it's obvious" only serves to make them feel bad about not getting it right away.


I felt that way when studying math in college, but after some time I actually came to like the use of "clearly" and the like, as it can be used as a check on whether you've spent enough time internalizing the previous information. It's one thing to have a text or a person hold your hand through algorithms or theorems; it's another to be able to do it yourself. So getting hit with a "clearly" that feels unjustified is often a signal to let go of the guiding hand and go back and review until the statement in question does become clear.


Sometimes this is the case, yes; sometimes it is not.

An example: http://jeffe.cs.illinois.edu/teaching/algorithms/book/Algori... Page 15

> It’s quite easy to show that the singing time is Θ(n²); in particular, the singer mentions the name of a gift Σ_{i=1}^{n} i = n(n+1)/2 times (counting the partridge in the pear tree).

I'm pretty sure that I still remember how to read the formula, and I even know what Θ(n²) means, but it's still unclear to me how we get n² and this formula from the "NDaysOfChristmas(gifts[2..n]):" example.
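For what it's worth, the claim can be checked by counting directly: verse i of the song names gifts i, i−1, ..., 1, so it contributes i mentions. A small sketch (my own illustration, not code from the book) comparing that count to the closed form:

```python
# Count how many times a gift name is mentioned across all n verses:
# verse i names i gifts, so the total is 1 + 2 + ... + n.
def gift_mentions(n):
    return sum(i for i in range(1, n + 1))

# The closed form quoted from the book: n(n+1)/2, which grows as Theta(n^2).
n = 12
assert gift_mentions(n) == n * (n + 1) // 2  # 78 mentions for the usual song
```

Since the singer does Θ(1) work per mention, the total singing time is Θ(n²).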


I agree as well. Maybe it serves to alert the reader to when they are missing a piece of prerequisite knowledge, but there are ways to do this that don't create as much friction.


Now that is something mathematicians quickly learn not to be bothered by:

the proof is easy and left as an exercise

The main takeaway I got from Polya's How to Solve it is that properly conceptualizing the problem in your head before attempting to solve it is key.


I used these lectures as a method for learning algorithms as a phys/mech eng with no CS background. I found them incredibly challenging but the lectures were written exceptionally well. On an internet with thousands of resources on this material, this was the very best I found.


From the second page (verso page?):

> Download this book at http://jeffe.cs.illinois.edu/teaching/algorithms/ or http://algorithms.wtf

algorithms.wtf is a beautiful url!


I have been reading through these and find them really unique, in a good way. I got a new perspective on a lot of stuff I already studied; I have a Bachelor's in CS and graduated 1.5 years ago.

It doesn't feel formal like a textbook, and yet it doesn't sacrifice mathematical rigor. I will be trying out the exercise problems, which seem equally daunting but fun.

I also submitted an issue request on Github and Jeff I have a few questions for you that I put on Quora, I have A2A'd you.

Lastly, thanks for taking the time out and putting content like this out for free. It helps millions of autodidacts like me.


Jeff's CS 374 was one of my favorite classes at U of I. It really opened up my mind to thinking about Computer Science as a branch of mathematics, and completely changed how I approach my work today.

The book form of these lectures was in EARLY rough draft when I took the class though; we were proofing chapters for him!


"Sedgwick’s reformulation [of AA trees] requires that no right child is red. Whatever. Andersson and Sedgwick are strangely silent on which end of the egg to eat first."


I had him as a professor as well. He’s a brilliant guy and educator, but could be a little rough with questions; i.e. making you feel a little dumb. His notes though were always excellent.


Jeff is also a frequent poster on Academia.StackExchange. He helped me quite a bit by answering my questions about grad school life.

https://academia.stackexchange.com/users/65/jeffe


> Chapter 0 uses induction, and whenever Chapter n−1 uses induction, so does Chapter n

I love this.


Wow students are so lucky for the sheer number of resources available today. Great post and thanks for sharing!


Great resource, though I wish it was available as an EPUB (or MOBI, or AZW3).


You can get EPUB and MOBI versions from the Internet Archive (auto-converted from my uploaded PDF), but I can't vouch for their quality.

All the fonts are baked into the PDF, so it should be readable anywhere; if it isn't, please submit a bug report!

But if you're looking for a format that lets you reflow the text, by changing the margins or font or text size, you're out of luck. The only way to write something like that is to bake it in from the beginning. That's easy for pure text, but hard to impossible for technical documents with lots of displayed equations, big hard-formatted boxes of text (i.e., algorithms), and the like.

(Boaz Barak managed it by writing his Modern Complexity Theory book entirely in Markdown. The mind boggles.)


I don't know what you use to typeset the book, as I can't find any source files, but isn't LaTeX really good at this?

I'm not sure what makes it hard to reflow. The text seems to be mostly paragraphs with figures, inline formulas and footnotes; pretty basic stuff, no?

(O'Reilly's books were famously typeset with Troff/Groff for many years, though I don't know how painful that was or how advanced their typesetting needs were.)

All I know is that I'd love a version that worked on smaller devices. I tried the Kindle version reformatted by archive.org, but it has lost most of the formatting, making it almost completely unreadable.

I'd especially like an HTML version of the book that was fully hyperlinked, with zoomable vector figures, footnotes hidden away in popups, etc.


> (O'Reilly's books were famously typeset with Troff/Groff for many years, though I don't know how painful that was or how advanced their typesetting needs were.)

I believe that O'Reilly Media has used ASCIIDoc[0] as their preferred internal format for quite a few years. Formulas are supported through ASCIIMathML[1].

More recently, they seem to have transitioned to HTMLBook[2] as a preferred source format for systems like Atlas[3].

[0] ASCIIDoc is basically a Markdown-like (or ReStructuredText-like) format that is feature-equivalent to DocBook XML: http://asciidoc.org/

[1] http://asciidoc.org/asciimathml.html

[2] http://oreillymedia.github.io/HTMLBook/

[3] https://atlas.oreilly.com/


> I can't vouch for its quality

The quality of the EPUB is terrible, it looks like it was OCR'd from images of the pages, not just converted from PDF.

Can you make the source format for the book available?


I use moon reader pro on Android. It allows annotating and highlighting pdf text. I am not affiliated except as a long time user.


It is. Follow the internet archive link.


An automatic conversion to EPUB from PDF or from a scan is not what I was asking for.


Why wish, when you can convert PDFs to EPUB very easily using online converters?


Yes, but the resulting documents often are horrendous when rendered on ereaders and similar.


Yeah, I tried doing that. The conversion is so bad, with multiple random line breaks within single lines, some Unicode characters not getting converted, etc. It was a bad experience.


I love Jeff Erickson's work and would also like to recommend his notes on Computational Topology [1] for further consumption.

[1]: http://jeffe.cs.illinois.edu/teaching/comptop/2009/schedule....


What the heck is the trivial one liner to check who will win a chess game given both players play perfect?


I believe this is a trick question. The "standard" chess board is 8x8, with 32 possible pieces. This gives you a constant (albeit absurdly large) number of configurations, which means that the one-line solution "brute force" is an O(1) algorithm.


I can only come up with a recursive algorithm which depends on a PerfectPlay(state, player) like so:

    WinnerWithPerfectPlay(state, currentPlayer):
      if Winner(state)
        return Winner(state)
      else
        nextPlayer <- "white" if currentPlayer = "black" else "black"
        return WinnerWithPerfectPlay(PerfectPlay(state, currentPlayer), nextPlayer)
Which doesn't allow for stalemate moves.
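Running this recursion to completion is infeasible for a full chess board, but the same perfect-play idea can be carried out exactly on a toy game. A hedged sketch (my own example, not from the thread) using a subtraction game, where memoized minimax decides the winner under perfect play:

```python
from functools import lru_cache

# Toy stand-in for chess: players alternately remove 1 or 2 stones from a
# pile; whoever takes the last stone wins. The game tree is finite, so we
# can evaluate it completely with memoized recursion.
@lru_cache(maxsize=None)
def first_player_wins(stones):
    if stones == 0:
        return False  # previous player took the last stone; we have lost
    # The player to move wins if some move leaves the opponent losing.
    return any(not first_player_wins(stones - k) for k in (1, 2) if k <= stones)
```

For this game the pattern is well known: the player to move loses exactly when the pile size is a multiple of 3. Since the state space is finite and memoized, the whole computation is bounded by a constant, which is the sense in which "solve the game" is O(1).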


I suppose the claim of the existence of this one liner might be the same as the claim that one of the players can always force a win, in which the line is something like "return 'white'" or "return 'draw'"


Does anyone have any impressions on how this compares to CLRS? (https://en.wikipedia.org/wiki/Introduction_to_Algorithms)


By Ctrl+F'ing, I find 5 mentions of the word "master", none of which are the master theorem. I prefer this to CLRS as, while it's a neat trick, it tends to result in a bunch of people memorising the cases (and taking a "because the book told me to" level of understanding away from that part of the course).


Exactly. The students should be the masters, not the theorem.


In particular, are there any new materials that have been invented or widely adopted since the older textbooks?

For example, bloom filters were rarely taught maybe 10 years ago but probabilistic data structures are now pretty mandatory in a data structure course.

Just wondering if there are sections in this book covering cutting edge stuff for people already familiar with traditional algo
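For anyone who hasn't seen one, the core of a Bloom filter fits in a few lines. A minimal sketch (the class name, parameters, and salted-hash scheme are my own choices, not from any course material): k bit positions are derived per item, membership queries can yield false positives but never false negatives.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: m-bit array, k salted sha256-derived hashes."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _indices(self, item):
        # Derive k indices by hashing the item with k different salts.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx] = True

    def might_contain(self, item):
        # True may be a false positive; False is always correct.
        return all(self.bits[idx] for idx in self._indices(item))
```

The non-magical analysis the parent alludes to is exactly about how m and k control the false-positive probability, which is why probability comfort matters.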


Not so much in the book itself, but definitely in the "Director's Cut" notes on the book web site. I cover bloom filters and the like in my more advanced algorithms courses.

Teaching that material correctly (without the traditional magical thinking) requires serious comfort with probability, which unfortunately doesn't come early enough in the CS curriculum at Illinois to be used in our data structures and algorithms courses.


Love his writing style. Accessible yet detailed with fun historical tangents to set context.


This has always bugged me: what reasons outside of convention do we have for preferring O(V + E) to O(E) on algorithms that only make sense in the context of a single connected component? I know this is slightly OT, but tangentially related.


Absolute treasuretrove, thanks for sharing!


After a quick browse, it seems a little too academic for daily programmers like myself.

I would love to learn more about practical dynamic programming these days; I hope there is a book that covers that extensively.


Go back and read it - it’s not!


The historic preludes to the chapters are awesome. The dynamic programming chapter starts with an intro to the study of meters in Sanskrit poetry.
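The classic result behind that prelude (usually attributed to Piṅgala) is that a line of n beats built from short (1-beat) and long (2-beat) syllables can be formed in Fibonacci-many ways. A small sketch of the counting DP (my own illustration, not code from the book):

```python
# Counting Sanskrit meters: a line of n beats is built from short (1-beat)
# and long (2-beat) syllables. A line of n beats ends in either a short
# syllable (leaving n-1 beats) or a long one (leaving n-2 beats), so
# M(n) = M(n-1) + M(n-2): the Fibonacci recurrence.
def count_meters(n):
    counts = [1, 1]  # M(0) = 1 (the empty line), M(1) = 1
    for _ in range(2, n + 1):
        counts.append(counts[-1] + counts[-2])
    return counts[n]
```

For example, a 4-beat line has 5 possible meters (SSSS, SSL, SLS, LSS, LL), which matches count_meters(4).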


I love how he offers the mnemonic http://algorithms.wtf

Bravo, Jeff!


This is a great resource. Though it is dated this month, there seems to have been a mention of it two years ago and a short HN discussion ensued: https://news.ycombinator.com/item?id=12873426


I had the privilege of being an undergrad in Jeff’s class in the late 90s. IIRC, it was his first year teaching. On the last day of class he received a standing ovation. The average grade on the final exam was ~50%. And yet he received a standing ovation. He’s that good.


I don't know why you guys think this book is so good; I don't see anything new in it, nor can I appreciate how he composed the content. It is just one of those random textbooks on this topic.


The graphs chapter is my favorite. It's not as rudimentary as in other textbooks.


I managed to get an A- in Erickson's 373 as an undergrad (grads took the same class but had their own curve) in 2001. Great class, great experience, great teacher. Pushed me to do algorithms at a difficulty I didn't think possible.


Are algorithms useful to learn for a non-programmer? Is there a benefit to thinking through what is presented in a book like this over solving general problems in a day-to-day context?


As a whole, I doubt a book like this would be of much use to a non-programmer. There are some high-level tricks that might be fun to learn and could be loosely applied to day-to-day thinking (should this be solved in a brute-force way, or is there some shortcut?). Overall, the book (I assume; I haven't read it) consists of technical and precise solutions to technical problems.

Algorithms are just ways of solving technical problems in the most efficient way possible (for various definitions of efficiency). They are deep programming knowledge that even most programmers don't use frequently.


Suspected this might be the case, thanks for your thoughts.


Please find something you love doing and learn more about that instead of picking up random things.


Sound advice. Thinking through the scope of a programming problem to come up with an algorithm sounds like a productive way of using your mind, but may be a waste of time if you don't intend on programming.


What if what you love doing is picking up random things ;)


You may be interested in the book Algorithms to Live By ( Brian Christian, Tom Griffiths). I thought it was a fun-to-read book, though I don't think there was any code in the book at all.


The section on dynamic programming is exceptionally clear.


I would appreciate a version of the book with the extended material as well! Is there a way to pre-pay advanced copies of the published book?


Copyright question: the copyright page says the draft was published 12/29/2018, but the copyright is 2019. Doesn't that mean somebody could have taken the work and published it as their own with a copyright of 2018 on 12/30/2018, which would be earlier than the author's 2019 claim? I'm not a copyright expert by any means, but I am always troubled when I see "Copyright $CURRENT_DATE" in electronic works; I thought it was supposed to reflect the earliest date of claim.


Oops. I'd say submit a bug report, but as of yesterday it became moot.

As others have said, the copyright notice is only a courtesy/reminder. I've held the copyright on all this stuff from the moment I started writing it (and distributing it) 20 years ago.


No, because copyright law doesn't care what the copyright date says, or even if you write it at all.

As soon as you write something, you have the copyright to what you wrote. If it was challenged in court, he would likely easily be able to show his drafts from prior to 12/29/2018 to prove he was the author.


You don't actually have to place a copyright on original works like this. Copyright is granted whether you put the little c symbol and date or not.


great job and thanks for keeping this free


How does this compare to CLRS? I am neck deep in reading CLRS..


The lecture on Matroids is pretty good.



