I have had this project on the go for a while and have decided to put it out as-is, because I can't foresee being able to complete it any time soon. For people who go straight to the comments, here's a video - https://www.youtube.com/watch?v=9XPE4uT0AdE - I have a very mumbly British accent, but you should be able to get the gist of it...
The one-line summary is: it's a "real" implementation of some of the demo GIFs in Bret Victor's "Learnable Programming".
The original project goal was to create a "bicycle for the mind" that actually has a provably positive effect on people's performance in algorithm competitions like Google Code Jam/Facebook Hacker Cup. I don't think I got there, but I think there's enough stuff here to interest people on a Sunday afternoon...
Constructive criticism greatly appreciated, here or on Github issues page. If you are experienced with CPython (as in, experienced in contributing to the actual interpreter), then I would appreciate input on how to deliver the "recording" aspect of the project properly. The current implementation is very hacky.
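(For readers wondering what "recording" means here: the straightforward pure-Python approach is a trace hook. This is only an illustrative sketch of that idea, not Algojammer's actual recorder.)

```python
import copy
import sys

# Hypothetical sketch, not Algojammer's real implementation: capture
# (line number, local variables) after every executed line.
snapshots = []

def tracer(frame, event, arg):
    if event == "line":
        # Deep-copy so later mutation doesn't rewrite history.
        snapshots.append((frame.f_lineno, copy.deepcopy(frame.f_locals)))
    return tracer  # keep tracing this frame and any calls it makes

def bubble_pass(xs):
    for i in range(len(xs) - 1):
        if xs[i] > xs[i + 1]:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

sys.settrace(tracer)
bubble_pass([3, 1, 2])
sys.settrace(None)

# snapshots now holds one entry per executed line, which is what
# lets a timeline be scrubbed back and forth after the run.
```

The hackiness comes from doing this at scale: tracing every line and copying every state is slow, which is presumably why a proper solution wants interpreter-level support.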
(I had a vague idea to do something similar for a specific program that has a reputation for being hard to debug (TeX); will definitely look to your code for inspiration.)
Have you looked at Python Tutor (http://www.pythontutor.com/)? That's another excellent project with some overlap — e.g. its insertion sort example (go to http://www.pythontutor.com/visualize.html, click on "Show example code and courses", then click on "Insertion sort"). If you have looked at it and can summarize the differences in approach between your project and that one, that would be highly useful.
Thanks for an inspiring project! This is so obviously the right way to do things that in hindsight it's a shame we're not already using it. :-)
You'd also probably get a lot of good feedback from competitive programmers.
But I still wonder if anyone writing actual algorithms would find this useful, and if so what exactly they'd find useful. I'd really like to see the field of UIs for programming get clearer on what data is useful to show to which programmers in what form and when.
My plan was basically to work my way through past problems, thinking specifically about each one - what's hard here, what would help me solve it, where did I go wrong? - and then add those features to Algojammer.
Then when Code Jam comes around again, I can really test it in a very falsifiable way.
I need a break and more time so I'm not going to be working on this for a while but I'm 100% on your side. Do you have any links to your research?
Also, I wish there were more research labs that could support folks in UIs for programming like this. Right now there are only a couple and a lot of independent people dabbling.
As for my work, you can see it at http://glench.com
When I was designing a complex, topology-transforming graph / search algorithm in C#, I had to cobble together my own "Algorithm Debugger" to fix some particularly tricky defects. My algorithm emitted a series of messages as it ran, which triggered viewport updates in different tabs, showing coroutine state, intermediate graphs, searches, etc.
It took me a month to write, and it was riddled with bugs. Starting from Algojammer I could probably have knocked it out in a couple of days.
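(The message-emitting pattern described above could be sketched roughly like this in Python; all names are hypothetical, and the real C# version drove GUI viewport updates rather than recording to a list.)

```python
from collections import defaultdict

class Bus:
    """Tiny message bus: viewers subscribe to message kinds."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, kind, handler):
        self.handlers[kind].append(handler)

    def emit(self, kind, payload):
        for handler in self.handlers[kind]:
            handler(payload)

bus = Bus()
frontier_sizes = []
# A "viewport" in the real tool would redraw here; we just record.
bus.subscribe("frontier", lambda nodes: frontier_sizes.append(len(nodes)))

def bfs(graph, start):
    # The search emits its frontier at every level; subscribers decide
    # how to render it (graph drawing, table, plot, ...).
    seen, frontier = {start}, [start]
    while frontier:
        bus.emit("frontier", frontier)
        nxt = []
        for node in frontier:
            for n in graph.get(node, []):
                if n not in seen:
                    seen.add(n)
                    nxt.append(n)
        frontier = nxt

bfs({"a": ["b", "c"], "b": ["d"], "c": ["d"]}, "a")
# frontier_sizes == [1, 2, 1]: one snapshot per BFS level.
```

The nice property is that the algorithm only states *what* happened; each tab is free to interpret the same message stream differently.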
There are so many ways this could be developed further, I love it!
I did half implement "Markers" which are coloured lines on the timeline similar to breakpoints.
I decided to shelve it until I had rewritten the whole thing to be more stable, which I think will require doing it as a fork of CPython.
What I really like about your implementation is that it's scratching the itch of abstraction code (i.e. the inherently non-visual), as opposed to something like Light Table's WebGL example (which is very impressive, but not applicable to most developers' day jobs).
I think one of the reasons game developers are so far ahead of the field in making the abstract visible is that they have a "canvas" right there that they are outputting to. It's very common to see really clever and intuitive visualisations covering the screen in game development, not because those guys are visionaries but because they are just trying to understand the code, and that's how they naturally express themselves.
Basically what I wanted to do was come up with a solution "one step up the ladder of abstraction" (is my BV fanboyism showing?) that lets users define their own problem-specific visualisations. This is where the concept of metacode came from. If you give people access to the data (omniscience), a blank canvas (sheet), and a way to express themselves (metacode), then natural curiosity should take over.
I'm not saying I achieved all that, but that was the idea. I think there's more work to be done on the language for describing the code; the current vocabulary is based around line numbers and execution steps, which are brittle when the code is changed. What you want is visualisations that persist and adapt as you edit the code.
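(A toy version of that triad - recorded states for omniscience, a canvas, and user-written metacode over the states - might look like the following. Every name here is hypothetical, not Algojammer's API; the "canvas" is just ASCII output.)

```python
def record_states(xs):
    """Insertion sort that snapshots the list after every swap."""
    states = [list(xs)]
    for i in range(1, len(xs)):
        j = i
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            states.append(list(xs))
            j -= 1
    return states

def metacode(state):
    # User-defined, problem-specific view: one bar per element.
    return " ".join("#" * v for v in state)

# The "tool" simply replays the user's view over every recorded
# state - one ASCII frame per snapshot.
for state in record_states([3, 1, 2]):
    print(metacode(state))
```

The point is that `metacode` belongs to the user, not the tool: swap in a different one-liner and the same recording renders as a table, a tree, or whatever the problem calls for.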
Is this heavily coupled to Python, or could it be extended to other languages like C++ or Java easily?
Can Chris, or any of the Lispers hanging around here, throw deeper insight at the following question/assertion: isn't what Bret Victor is proposing 'visual programming' on top of a live REPL?
> Programming has to work like this. Programmers must be able to read the vocabulary, follow the flow, and see the state. Programmers have to create by reacting and create by abstracting. Assume that these are requirements. Given these requirements, how do we redesign programming?