I think Lisp still has an edge for larger projects and for applications where the speed of the compiled code is important. But Python has the edge (with a large number of students) when the main goal is communication, not programming per se.
In terms of programming-in-the-large, at Google and elsewhere, I think that language choice is not as important as all the other choices: if you have the right overall architecture, the right team of programmers, the right development process that allows for rapid development with continuous improvement, then many languages will work for you; if you don't have those things you're in trouble regardless of your language choice.
Although to be honest, there's Clojure, which isn't as easy to learn as Python nor as flexible/powerful as CL, but sits in a comfortable middle ground.
Not pure, not perfect, but still great!
It's an elegant way to learn programming, the history of AI, and simple old-school Common Lisp in the same package.
AI methods include GPS, Eliza-style chatbots, symbolic math, constraint satisfaction, logic programming, and natural language processing.
It's a wonderful book and it even includes some advanced advice on how to speed up Common Lisp code, which I have not found elsewhere!
I think the course to which you refer is on Udacity, with Thrun; I found it easier to read the book.
How will these make me a better programmer? Is each designed to teach a different programming concept or technique? Are they useful in fields other than machine learning or AI?
What's the benefit of a program that detects palindromes ?
He does this not only in those notebooks but also, for example, in his "Paradigms of Artificial Intelligence Programming" book. I have not seen this done to any serious extent in any algorithms textbook or tutorial.
I really like his Python code, but I think his Lisp code from PAIP is extremely underrated. In fact, it makes me sad he switched to Python for some AI tasks (same as Goodman & Tenenbaum in http://probmods.org) given that Lisp offers a unique feature for AI: homoiconicity.
This is something that I think will make a comeback with Bayesian program learning. For a glimpse of the future, see above link.
There are lots of Lisp/FP books that aren't as well known as PAIP. Based on the few I picked up on eBay (Henderson's recursion book and the like), we might be in for a lot of surprises.
Where is this published? I couldn't look it up.
His code is highly readable and elegant, and I wish every single person I'll ever share a codebase with had studied him first.
I think it's a trait of brilliant minds. They simplify things. I don't remember the exact quote, but it was something like this:
* a fool doesn't understand
* a normal person can be taught
* an intelligent person can teach
* a genius can simplify
“If you can't explain it to a six year old, you don't understand it yourself.”
I really think what Einstein is getting at here is the ability to simplify things to the level that a six-year-old can understand. Of course, you probably can't simplify the mathematics to the level of a six-year-old, but you can simplify the concepts so that they are understandable by a six-year-old.
I've found this to be true in my own life. I think I understand something, but then I try to teach it to someone else and learn that I don't really understand it. So I have to go back and study and think about it some more until I understand it better. If I really understand it thoroughly, inside and out, then I find that I am able to simplify things such that I can explain them to my young children.
It is also why teaching people programming makes you a better programmer. Even if you're answering newbie questions, it often makes you re-evaluate your knowledge or realize you don't understand something as clearly as you thought. Even newbies have different needs and interests, so they ask different questions than you would.
We all have had moments when we realized how stupid the solution was. Most of the things that elude us trigger a chase for complexity, when the answer is mostly trying simple stuff that isn't in our neural habits.
By virtue of being better written and more thought out.
Also, they are not "programming challenges". They are solutions to programming challenges.
(I felt a little bad about these reactions because I'd helped with that code. In its defense, I've also been told by a couple of people that they gained a lot by working through it.)
Primarily the problem is that it's a living body of mutable state -- like an Excel spreadsheet, except it doesn't auto-update -- if I poke at some bit of it, and then re-evaluate a cell, some state will change, probably. And then maybe other cells' outputs will be invalid. Hmm, maybe I should re-evaluate the whole notebook? Oh, but some of the cells contain shell commands, for example to download a dictionary file from norvig.com. Do I want to do that? Maybe I should just undo the change I made. The ipynb is in git of course. But instead of a nice diff showing lines of code, the python code is held inside some JSON document.
The IPython notebook is beautiful technology; the front-end is really well done. But accumulating all that mutable state goes against all my instincts as a programmer. I prefer editing .py files in my text editor, with Make when necessary. And then creating output from input with a single, transient execution.
- Write all my python code in .py files as usual.
- The Jupyter notebook contains calls to functions generating graphical/HTML/etc output, nothing else.
- Either use one of the various `reload*` functionalities to update the state of the persistent python process, or use `ipython console --existing` to create a conventional ipython shell sharing the same kernel as Jupyter and evaluate code in there.
basically 1) click the link 2) click clone 3) sign in (any gmail, msft id), then go to the notebooks directly and run them. You can also click Terminal for shell prompt.
The sign-in is required since you can edit the notebooks, store/modify data etc.
For example, going through his notebook for Advent of Code (the first one linked).
You learn to build a Lisp interpreter in about 100 lines of Python (you miss a few features like macros, but it's still quite complete). Very nice for learning how language interpretation works.
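The flavor of that interpreter can be sketched in miniature. This is a heavily trimmed illustration in the spirit of Norvig's lis.py, not his actual code: it supports only numbers, a handful of arithmetic primitives, `define`, and `if`.

```python
# A tiny Lisp-like interpreter: tokenize -> parse -> evaluate.
import operator as op

def tokenize(src):
    """Split source text into tokens by padding parentheses with spaces."""
    return src.replace('(', ' ( ').replace(')', ' ) ').split()

def parse(tokens):
    """Read one expression from a token list (consumed in place)."""
    token = tokens.pop(0)
    if token == '(':
        expr = []
        while tokens[0] != ')':
            expr.append(parse(tokens))
        tokens.pop(0)            # discard the closing ')'
        return expr
    try:
        return float(token)      # a number...
    except ValueError:
        return token             # ...or a symbol

ENV = {'+': op.add, '-': op.sub, '*': op.mul, '/': op.truediv, '<': op.lt}

def evaluate(x, env=ENV):
    """Evaluate a parsed expression in an environment (a plain dict)."""
    if isinstance(x, str):            # symbol: look it up
        return env[x]
    if not isinstance(x, list):       # number: self-evaluating
        return x
    if x[0] == 'define':              # (define name expr)
        env[x[1]] = evaluate(x[2], env)
    elif x[0] == 'if':                # (if test then else)
        return evaluate(x[2] if evaluate(x[1], env) else x[3], env)
    else:                             # (fn arg ...)
        fn = evaluate(x[0], env)
        return fn(*[evaluate(arg, env) for arg in x[1:]])

print(evaluate(parse(tokenize('(+ 1 (* 2 3))'))))   # 7.0
```

The real lis.py adds lambdas, proper lexically scoped environments, and a REPL, but the three-stage shape above is the whole idea.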
Obviously looking at resources to learn approaches to solving the problem isn't cheating, but there is some value in spending time just trying to figure out an approach before doing that.
For me, the answer to all three questions is - yes.
It depends on your previous knowledge, so some problems might seem too easy and/or some other problems might be too hard.
If you are an experienced programmer, maybe use AoC to learn/practice some language you didn't use before?
Norvig wrote some of the Python version https://github.com/aimacode/aima-python
source: was a contributor
For instance: https://github.com/norvig/pytudes/blob/master/py/SET.py
Why the CAPS name? Why are we defining `test` if we’re just running it on the next line - or rather, why are we running the test in the same file as its definition, and then running an example 100k times?
Any professional python developer will know this, but when an adventuring friend calls me to help him debug something, with code modeled on what he’s seen in examples like these, I’ll likely just throw my hands up.
On the flip side, this isn’t bad for showing how something works, and the cognitive overhead for a beginner to “get to the meat” if he’d used `argparse` or `click` might be a bit high.
Defining a function and immediately calling it prevents pollution of module scope. Without it, `photo` would be exported.
The tests are in the same file because it makes the example self-contained, I assume.
The experiments are run 100,000 times because they have a random component and this exercises the system.
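The scope-pollution point can be shown with a toy version of the pattern. These are made-up names for illustration, not the actual SET.py code: everything bound inside the function stays local instead of leaking into module scope.

```python
# Define-then-immediately-call: working names stay local to the function.
def run_experiments():
    deck = list(range(81))        # throwaway working state
    results = sum(deck)           # stand-in for the real experiment
    return results

print(run_experiments())          # called right after its definition

# Neither working name escaped into the module's globals:
assert 'deck' not in globals()
assert 'results' not in globals()
```

Without the function wrapper, `deck` and `results` would be module-level names, visible to anyone who imports the file.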
The name `SET` conveys little. Running the exercise in the same file is OK as it’s not in a module, but again - the work demonstrates algorithms well enough, but I can’t see this coding style as “smart” or “clean”. It’s not.
Game of Set (Peter Norvig 2010-2015)
It's the name of the problem being solved.
A professional Python developer would also know the difference between code written for exposition, code written to validate an idea, and code written for production use. They are not identical, and should not be. The concerns and aims of each kind of code are different, they optimize for different things, and that is reflected in the coding style.
>but when an adventuring friend calls me to help him debug something, with code modeled on what he’s seen in examples like these, I’ll likely just throw my hands up.
I doubt changing the capitalization of a variable renders you unable to help. Seems like something you could trivially filter out, and use as an opportunity to educate.