Project Euler has good problems, but they are heavy on the math and that does require some domain knowledge.
I'd be happier with a book that has 50 fairly simple apps to build in a simple language like Python, where the apps aren't a whole lot longer than tic-tac-toe (~1/2 page).
I like how people's perception of Project Euler depends on their background: programmers tend to say it's math heavy, mathematicians tend to say it's mostly about programming.
I'd like to mention that Project Euler states that they'd prefer that people not publish the solutions publicly:
> I learned so much solving problem XXX so is it okay to publish my solution elsewhere?
> It appears that you have answered your own question. There is nothing quite like that "Aha!" moment when you finally beat a problem which you have been working on for some time. It is often through the best of intentions in wishing to share our insights so that others can enjoy that moment too. Sadly, however, that will not be the case for your readers. Real learning is an active process and seeing how it is done is a long way from experiencing that epiphany of discovery. Please do not deny others what you have so richly valued yourself.
I did twenty or so Euler problems in Python a while back, and I've just started doing a few in C# to brush up for an interview. Forcing myself to write working programs is a really valuable complement to reading through C# in Depth; or rather, the reading is the complement to the writing.
Although I agree with another poster that Euler is a bit math heavy, being able to submit a single number to check your solution is handy. Google's foobar recruiting puzzle system actually tests your program, but it's limited to Python and Java.
> SQL Query Analyzer - A utility application which a user can enter a query and have it run against a local database and look for ways to make it more efficient.
Non-trivial, if "look for ways to make it more efficient" means non-trivial suggestions (i.e., not just "add moar indexes").
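To be fair, the trivial version really is only a few lines, which is probably why it made the list. A minimal sketch with Python's stdlib sqlite3, assuming a made-up table and query; SQLite's EXPLAIN QUERY PLAN reports unindexed table access as a "SCAN", which you can flag naively:

```python
# Trivial "query analyzer": run EXPLAIN QUERY PLAN and flag full scans.
# The table and query are made up for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")

query = "SELECT * FROM orders WHERE customer_id = 42"
for row in conn.execute("EXPLAIN QUERY PLAN " + query):
    detail = row[3]  # human-readable description of the plan step
    print(detail)
    if "SCAN" in detail:
        print("  -> full scan; consider an index on the filtered column")
```

The non-trivial version (rewriting the query itself, spotting redundant joins, etc.) is where it stops being a half-page app.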
Thanks for sharing. I'm putting together materials for a course on data structures that I'm teaching this fall.
I have the theory part covered, but I'm always on the lookout for things that could make for decent practical assignment problems - reasonably attainable in scope, but without being too contrived.
True, and when talking about "numbers" one would generally include 3/7 and maybe e, negative numbers and perhaps complex numbers... or at the very least negative integers...
Besides, you may as well consider 1 a prime factor; the task is just a silly programming toy problem. 1 not being prime is pure definition: including 1 among the primes wouldn't materially change the concept, it would just make a lot of advanced math tedious to write down.
The real numbers do not have any primes: every nonzero real number is a unit.
EDIT: Because it gets brought up so often, I want to respond directly to the claim that 1 is prime "by definition". That is true only in the highly technical sense that we could make the definition whatever we want.
One of the core properties of primes is Euclid's lemma, which states that if p is a prime and p divides xy, then p divides x or p divides y. Symbolically: p|xy => p|x OR p|y.
Indeed, when we generalize the notion of "prime" from the integers to arbitrary rings, this "lemma" becomes part of the definition. In fact, even the definition of "irreducible" elements, a strictly weaker notion than primality, excludes units (such as 1).
The property "p|xy => p|x or p|y" does in fact hold for p=1. (The conditions on the LHS and RHS are then true for all x,y.) The simplest formulation of the condition for irreducibility (p=qr => q=1 or r=1) is also true for p=1.
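A quick brute-force check of that claim in Python (my own sketch; the bounds are arbitrary):

```python
# Check "p | xy  =>  p | x or p | y" over a small range of x, y.

def euclid_property(p, limit=50):
    return all(
        x % p == 0 or y % p == 0
        for x in range(1, limit)
        for y in range(1, limit)
        if (x * y) % p == 0
    )

print(euclid_property(1))  # True: 1 divides everything, so both sides hold
print(euclid_property(5))  # True: 5 is prime
print(euclid_property(6))  # False: 6 divides 2*3 but divides neither factor
```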
Taking 1 not to be prime is nonetheless pretty clearly better than the alternative. Here are a few examples of neat things that would break if we took 1 to be prime:
When p is prime Z/pZ -- i.e., "integers mod p" -- is a field, meaning a thing in which you can do +,-,*,/. This would be false if we allowed p=1, though admittedly there's a not-too-crazy way you could change the usual definition of "field" that would fix it.
The "fundamental theorem of arithmetic": Every positive integer is the product of primes and this product is unique up to rearrangement. This would be false if we took 1 to be prime, because you can always add or remove 1s from the factorization.
The famous Riemann zeta function (zeta(s) = sum over positive integers n of n^-s) has a representation as an infinite product: zeta(s) = product over prime p of (1-p^-s)^-1. For a little example of why this is nice, note that it provides an instant proof that there are infinitely many primes, because as s->1 zeta(s)->infinity, which would be impossible if that were a finite product. This would become nonsense if we allowed p=1 because one factor would then be 1/0.
Prime numbers all have exactly two divisors (1 and p). This would become false if we took 1 to be prime.
If you changed the definition of "prime numbers" so that it included 4, but was otherwise the same, you could also change other statements and definitions to account for this.
It would be entirely unreasonable to say that 4 might as well be considered a prime number because of this.
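By the way, the zeta product above is easy to check numerically. A quick Python sketch (s = 2 and the prime bound are arbitrary choices of mine): the product over genuine primes converges to zeta(2) = pi^2/6, while including p = 1 would mean the first factor is 1/(1 - 1), i.e. division by zero.

```python
# Numeric check of the Euler product zeta(s) = prod over primes (1 - p^-s)^-1
# at s = 2, where zeta(2) = pi^2/6.

import math

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

s = 2
product = 1.0
for p in primes_up_to(10_000):
    product *= 1.0 / (1.0 - p ** -s)

print(product)           # ~1.64493...
print(math.pi ** 2 / 6)  # 1.64493...
```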
Great list for inspiration! That said, it is not true that all the problems can be solved in any programming language: try making a "Bandwidth Monitor" in Coq.
Coq's not really a programming language, is it? It's a proof language. So, you could prove the correctness, or perhaps possibility, of a bandwidth monitor in Coq, given certain assumptions, but it would be up to some other poor schmuck to actually write it.
Well, what constitutes a programming language then? It sounds like there is a clear definition :-)
Coq is certainly a programming language, in that you can build programs with it. But you can't interact with the outside world; you can only build pure functions (which is why something like a bandwidth monitor is quite hard to build).
In my opinion, English could also qualify as a programming language, especially given the rise of NLP products. You can actually program a computer to do quite advanced tasks solely through natural language.
"Well, what constitutes a programming language then? It sounds like there is a clear definition :-)"
Some bloke called Turing had a few ideas on a definition. (I'll whisper "whoosh" here - for my own note)
Given that any programming language can be defined in English (or any other human language, with a few contortions as required, if any) then of course English can be used to create a Turing Machine (TM). It doesn't take much NLP to define a full TM via speech.
Given how simple a Turing Machine can be, I seriously doubt (without any proof) that Coq is unable to recreate a TM.
I am not an expert, but the explanation of Turing Machines I read was in GEB and involved a simple tape, a slack handful of symbols and operations, and some basic hand waving. If Coq can't manage that then it is unlikely to be useful, which I doubt.
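For a sense of how little machinery that tape-and-symbols picture needs, here's a minimal TM simulator sketched in Python rather than Coq (my own illustration; the example machine appends a 1 to a unary number):

```python
# A minimal Turing machine simulator: a sparse tape, a head,
# and a finite transition table.

def run_tm(tape, transitions, state="start", blank="_"):
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

# (state, symbol read) -> (symbol to write, head move, next state)
increment = {
    ("start", "1"): ("1", "R", "start"),  # skip over the existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write one more 1 and stop
}

print(run_tm("111", increment))  # -> "1111"
```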
You might add some basic (useful) encryption like RSA. There's a running joke in the CS department at my university that every other class has to teach RSA, so it is not incredibly difficult to do.
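For anyone who wants the flavor, here's a toy RSA round trip in Python with tiny primes (illustration only; real RSA needs large random primes, padding, and a vetted library):

```python
# Toy RSA key generation, encryption, and decryption with tiny primes.
# Insecure by construction -- purely to show the mechanics.

from math import gcd

p, q = 61, 53            # two small primes (real keys use ~1024-bit primes)
n = p * q                # public modulus
phi = (p - 1) * (q - 1)  # Euler's totient of n

e = 17                   # public exponent; must be coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)      # private exponent: modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)  # decrypt: c^d mod n
print(ciphertext, recovered)       # recovered == 42
```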
Thanks for this - I'm just beginning on the long journey of learning programming, and have been looking for challenges to set myself now that I have (some) of the basics down (but large, practical projects are still somewhat daunting).
I'd like to make a wikipedia-of-code where each page corresponds to a "project" like the ones listed here. The project entry would look like a Jupyter notebook, and you can edit/execute the code freely.
I don't know if I'd be courageous enough to run random internet code server side, but I know there are some people that do. I was thinking about trying to get a lisp-to-javascript compiler, then use it to compile a lisp-interpreter-written-in-lisp to javascript, then the notebook would just be a fancy client-side lisp REPL.
The part you might find interesting is how it treats pages as modules you can import (from other pages), and integrates that with built-in version control (though I never got to the UI for the version control).
Yeah, running it all client-side makes more sense. See my profile info if you ever feel like chatting about it.
I've actually been working on this while trying to help a friend learn programming.
I've been looking for some meatier projects that still have good feedback loops, without requiring too much domain knowledge.
For example, I've been working through a project with the friend that involves scraping prices off a website and building a tool to automatically order things from it.
Multiple distinct parts, each with their own, very visible, success state. But at the same time, not too many challenging domain specific issues (main issue was just explaining CSS selectors and form POSTs).
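In case it's useful to anyone else teaching this, the two pieces look roughly like the sketch below, using requests and BeautifulSoup. The URL, selector, and form fields are placeholders; every real site needs its own.

```python
# Piece 1: scrape a price with a CSS selector.
# Piece 2: place an order by POSTing the form the browser would submit.
# (URL, selector, and form fields below are made up.)

import requests
from bs4 import BeautifulSoup

page = requests.get("https://example.com/product/123")
soup = BeautifulSoup(page.text, "html.parser")
price = soup.select_one("span.price").get_text(strip=True)
print("Current price:", price)

order = requests.post(
    "https://example.com/cart/add",
    data={"product_id": "123", "quantity": "1"},
)
print("Order status:", order.status_code)
```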
To my own question, I'll add the 99 Problems for Scheme, Prolog, Ocaml, Haskell, Scala, and various other languages, as mentioned elsewhere on this thread.
They contain a surprising amount of basic computer science in very short questions. Master them and you'll be well on your way to understanding functional (or declarative, in the case of Prolog) programming.
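To give a taste, here's roughly the first problem from those sets (find the last element of a list), done recursively in Python since that's the lingua franca of this thread; the Haskell and Scheme versions are even shorter:

```python
# 99 Problems, P01: find the last element of a list, recursively.

def my_last(xs):
    head, *tail = xs
    return head if not tail else my_last(tail)

print(my_last([1, 2, 3, 4]))  # -> 4
```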
Codewars provides (mostly) bite-sized coding challenges in a wide range of programming languages. Each challenge is ranked by difficulty, and, once completed, you can view other user solutions for learning and comparison (which other similarly-designed sites do not allow).