I would probably recommend studying individual classes of algorithms and moving on only when you feel ready, as opposed to learning algorithms in ascending difficulty (at the risk of learning them in a haphazard fashion).
VisuAlgo is another cool site that has lots of algorithm visualizations...
http://visualgo.net
What you will probably find is that it's more valuable to gain experience designing your own algorithms using tried-and-true techniques such as dynamic programming, greedy methods, divide-and-conquer, linear programming, etc. Also keep in mind that data structures are closely linked to algorithms, and vice versa. If you are not familiar with big-O notation, I suggest you begin there, as it can be used as a measure of both an algorithm's time complexity and its space complexity.
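To make one of those techniques concrete, here is a minimal dynamic-programming sketch for the classic minimum-coin-change problem (the function name `min_coins` is my own, not from any of the books mentioned):

```python
# Dynamic programming: fewest coins summing to `amount`.
# Runs in O(amount * len(coins)) time and O(amount) space.
def min_coins(coins, amount):
    INF = float("inf")
    best = [0] + [INF] * amount          # best[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] != INF else -1

print(min_coins([1, 5, 10, 25], 63))  # 6  (25 + 25 + 10 + 1 + 1 + 1)
```

The key dynamic-programming move is that each subproblem (`best[a]`) is solved once and reused, instead of recomputing overlapping subproblems recursively.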
One algorithm a day -- the short answer is that if you want to really learn them and not just forget them, you have to cheat and constantly review previously 'learned' algorithms as you go. Otherwise you will learn X algorithms and likely forget nearly X algorithms.
I have to question the value of only focusing on learning algorithms and on the idea of optimizing the quantity.
In terms of learning lots of them, it might be more useful to focus on learning more fundamental algorithms _better_ rather than tons of them. Or you might want to carefully select the most generally useful algorithms or ones in a specific field relevant to current projects.
Also, now that we have such powerful general purpose languages and efficient module versioning and distribution, learning to take advantage of those probably has more practical use.
For example, you could spend weeks or years learning various statistical methods and machine learning algorithms in a particular area. But then you realize that all of those algorithms are already implemented in Python libraries, so you start learning how to apply the libraries in real applications rather than reimplementing them.
But then you find out that deep learning techniques far outperform all of those algorithms you learned to implement and then apply via those libraries.
So then you train yourself on sophisticated deep learning techniques and start to implement an LSTM in Python. Then you realize you never quite got a fundamental understanding of neural networks, so you go back and work on learning that better.
Then you implement some core neural network algorithms in Python and start to build back up to your LSTM implementation. Now you find out that TensorFlow exists but lacks good support for AMD hardware, which your university has (perhaps erroneously) made a large investment in.
So then you decide the best thing to do would actually be to try to fix some basic bugs that occur on your platform with the latest TensorFlow/OpenCL/AMD code (or whatever).
You manage to fix one of the minor issues and now several geniuses have their day improved by a bit or two.
The point is, trying to learn a ton of random algorithms in a short period probably isn't the best use of your time.
Last year there was Advent of Code ( http://adventofcode.com/ ), and while they did not name them, each exercise was based on a known algorithmic problem (knapsack, traveling salesman, ...). It was a fun way to engage people in a "1 exercise a day" pattern.
I guess you can also try your hand at CodinGame's puzzles ( https://www.codingame.com/training ), as they also involve known algorithms and they are really fun to play.
But ultimately, both of these resources won't teach you how to implement algorithms.
You might try The Algorithm Design Manual. The second half of the book is a giant index of algorithms, and the first part is a guide to which algorithm to pick depending on what you're doing.
I second this. I particularly like the in-depth section on how to mathematically derive the big-O complexity of code and build up the proof from there. It's not necessary, but it's a fun exercise in learning how different algorithms get their complexity bounds.
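As a complement to deriving the bound on paper, you can count the basic operations empirically. This sketch (the helper name is my own) counts comparisons in selection sort, which follow the n(n-1)/2 formula behind its O(n^2) bound:

```python
# Count the comparisons selection sort makes on an n-element input.
# The total is always n(n-1)/2, regardless of the input order.
def selection_sort_comparisons(xs):
    xs = list(xs)
    count = 0
    for i in range(len(xs)):
        m = i
        for j in range(i + 1, len(xs)):
            count += 1                   # one comparison per inner step
            if xs[j] < xs[m]:
                m = j
        xs[i], xs[m] = xs[m], xs[i]      # place the minimum at position i
    return count

for n in (10, 20, 40):
    print(n, selection_sort_comparisons(range(n)))  # 45, 190, 780 = n(n-1)/2
```

Doubling n roughly quadruples the count, which is exactly what the derived quadratic bound predicts.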
Just read The Art Of Computer Programming by Knuth very slowly. There's basically a different algorithm every other page. Of course, it will still take you years.
Donald Knuth says he usually writes his books at about one page a day[0]:
"Usually when I write a book I finish about one page a day. I mean, if the book is 365 pages long it takes me a year; if it's 500 pages long it takes me a year and a half."
I'd say you'd be far better off deriving your own algorithm every day.
Make something to solve a real problem every day of your life and you'll be far better at solving problems than other people. I'd rather be able to do that than just parrot back sorts, graph traversals, and whatnot.
Depending on your level of programming ability, one algorithm a day, IMHO, is completely doable. A number of comments and suggestions say that one per day is an unrealistic goal (yes, maybe it is) but the idea of setting a goal and working through a list of algorithms is very reasonable.
If you are just learning programming, plan on taking your time with the algorithms but practice coding every day. Find a fun project to attempt that is within your level of skill.
If you are a strong programmer in one language, find a book of algorithms using that language (some of the suggestions here in these comments are excellent). I list some of the books I like at the end of this comment.
If you are an experienced programmer, one algorithm per day is roughly doable. Especially so, because you are trying to learn one algorithm per day, not produce working, production level code for each algorithm each day.
Some algorithms are really families of algorithms and can take more than a day of study; hash-based lookup tables come to mind. First there are the hash functions themselves: that would be day one. Next there are several alternatives for storing entries in the hash table, e.g. open addressing vs. chaining: days two and three. Then there are methods for handling collisions, linear probing, secondary hashing, etc.: that's day four. Finally there are important variations, perfect hashing, cuckoo hashing, Robin Hood hashing, and so forth: maybe another five days. Some languages are less suited to playing around and can make working with algorithms more difficult; instead of a couple of weeks this could easily take twice as long. After learning other methods of implementing fast lookups, it's time to come back to hashing and understand when it's appropriate, when alternatives are better, and how to combine methods into more sophisticated lookup schemes.
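To make the chaining variant concrete, here is a toy sketch (the class name and fixed bucket count are my own simplifications; a real table would also resize as the load factor grows):

```python
# Toy hash table using separate chaining: each bucket holds a list of
# (key, value) pairs, so colliding keys simply share a bucket.
class ChainedHashTable:
    def __init__(self, buckets=8):
        self.buckets = [[] for _ in range(buckets)]

    def _bucket(self, key):
        # Hash the key, then reduce it to a bucket index.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # same key already present: update
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # different keys collide: chain

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default

t = ChainedHashTable()
t.put("a", 1); t.put("b", 2); t.put("a", 3)
print(t.get("a"), t.get("b"), t.get("c"))  # 3 2 None
```

Open addressing would instead store entries directly in the bucket array and probe for the next free slot on collision, which is one of the design trade-offs mentioned above.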
I think you will be best served by modifying your goal a bit and saying that you will work on learning about algorithms every day and cover all of the material in a typical undergraduate course on the subject. It really is a fun branch of Computer Science.
A great starting point is Sedgewick's book/course, Algorithms [1]. For more depth and theory try [2], Cormen and Leiserson's excellent Introduction to Algorithms. Alternatively, the theory is also covered by another book by Sedgewick, An Introduction to the Analysis of Algorithms [3]. A classic reference that goes far beyond these other books is of course Knuth [4], suitable for serious students of Computer Science, less so as a book of recipes.
After these basics, there are books useful for special circumstances. If your goal is to be broadly and deeply familiar with Algorithms you will need to cover quite a bit of additional material.
Numerical methods -- Numerical Recipes 3rd Edition: The Art of Scientific Computing by Press, Teukolsky, Vetterling, and Flannery. I love this book. [5]
Randomized algorithms -- Randomized Algorithms by Motwani and Raghavan. [6], Probability and Computing: Randomized Algorithms and Probabilistic Analysis by Michael Mitzenmacher, [7]
Hard problems (like NP) -- Approximation Algorithms by Vazirani [8]. How to Solve It: Modern Heuristics by Michalewicz and Fogel. [9]
Data structures -- Advanced Data Structures by Brass. [10]
Functional programming -- Pearls of Functional Algorithm Design by Bird [11] and Purely Functional Data Structures by Okasaki [12].
Bit twiddling -- Hacker's Delight by Warren [13].
Distributed and parallel programming -- this material gets very hard so perhaps Distributed Algorithms by Lynch [14].
Machine learning and AI related algorithms -- Bishop's Pattern Recognition and Machine Learning [15] and Norvig's Artificial Intelligence: A Modern Approach [16]
These books will cover most of what a Ph.D. in CS might be expected to understand about algorithms. It will take years of study to work through all of them. After that, you will be reading about algorithms in journal publications (ACM and IEEE memberships are useful). For example, a recent, practical, and important development in hashing methods is called cuckoo hashing, and I don't believe that it appears in any of the books I've listed.
https://projecteuler.net/ is an excellent tool for developing algorithmic thinking. From the project description:
The problems range in difficulty and for many the experience is inductive chain learning. That is, by solving one problem it will expose you to a new concept that allows you to undertake a previously inaccessible problem. So the determined participant will slowly but surely work his/her way through every problem.
I disagree with this recommendation; Project Euler is focused on mathematical questions, and solving them doesn't exercise the kind of thinking used in constructing most algorithms.
The English translation is still in its early stages, but the original resource in Russian is a trove of information on algorithms. It suits the "one algorithm a day" learning format well.
Code Wars is my favorite. The earlier ranks are pretty simple, but there's a broad range of challenging problems at the higher levels. The online editor and test runner verify your solution, and the community aspects are a big win.
You might be interested in Programming Praxis https://programmingpraxis.com/, a blog that posts a few exercises per week, with solutions. Some of them are more interesting, some of them less so.
If you want to learn an algorithm I suggest implementing it as well. Since this question does not state any prerequisites, I can only suggest the hands-down best practical data structure and algorithm study and reference book I've come across:
http://infolab.stanford.edu/~ullman/focs.html
It's old but concise and very much to the point. All of the material is highly practical.
Aside from agreeing with other answers (you can't... and probably wouldn't get a good return from it)...
https://www.hackerrank.com/ has a lot of great algorithm challenges. They won't teach you the algorithms, but you'll need to learn them to solve the problems.
Go for http://www.spoj.com/. Every question there requires an algorithm, and doing one question per day will steadily make you better at algorithms.
Hackerrank! I am currently taking an algorithms course at uni, and I have found HackerRank's questions to be a great way to practice your coding skills. It can also help you out with recruiting.
You can't. You may be able to consume the knowledge underpinning an algorithm and parrot it back but any attempt to learn it in one day is doomed to failure. Parrot knowledge has zero retention.
There's truth to this of course, but I do find that self-imposed checkpoints are a great source of motivation. Rather than 1 algorithm/day, a better plan (in my experience) would be to find a nice textbook, and schedule a chapter/month (depending on the density of the chosen text) with flexibility.
Because "learning the shit out of any one algorithm" is not a very well-defined goal and is, at best, a locally optimal strategy. If the OP's goal is to become literate in many topics, then learning those topics in broad strokes may be better than learning all the details up front.
There often can be a vast literature on any one given topic or algorithm. Are you suggesting the OP acquire researcher-level expertise in everything? Everybody, at some point while learning, makes the decision as to whether or not they understand enough for their own purposes, and then chooses to dive deeper or move on to something else.
Not to overuse buzz terms, but the idea of repeatedly grazing over new concepts without ever gaining a firm grasp of them is a textbook local max way of learning.
Going deep on one algo at a time not only gives you a grasp of that individual algo; it gives you insight into computer science as a whole.
Not to mention that, over time, it would also come to look like broad strokes.
Taking up a new algorithm each day is the best way I can think of to be sure you don't know any algorithms well. Learn what the various algorithms are, what problems they solve, and what the general classifications of them are, then move on. You are then empowered to dig deeper into the correct one once you have the actual need for one, you know if it exists or not, and how to begin.
http://jeffe.cs.illinois.edu/teaching/algorithms/
If you can handle the dude's voice, I also recommend checking out Xoax.net's algorithm videos... http://xoax.net/comp_sci/crs/algorithms/index.php