
Live Coding in CoffeeScript - bengl
http://bryanenglish.com/2013/02/18/live-coding-in-coffeescript.html
======
jbaudanza
I love this! It would be cool to see this integrated with something like
share.js so people could collaborate.

~~~
bengl
Thanks! I hadn't seen share.js before. I'll have to look into it.

------
rjmarsan
I'd recommend using Timbre.js <http://mohayonao.github.com/timbre.js/>. As much
as I love audiolet, Timbre.js knocks the socks off it. It's very much based
on SuperCollider, which is really THE language to live-code in.

Also, it's way more fully developed than audiolet, and seems to work way
faster.

------
benmanns
In case anyone is having this problem: Press ESC to run the scripts.

------
yaxu
Great stuff!

See also "livecodelab" and "gibber":
<http://www.sketchpatch.net/livecodelab/index.html> <http://www.charlie-
roberts.com/gibber/>

Gibber is multi-user live coding in the browser.

There are many more live coding systems here: <http://toplap.org/>

------
shurcooL
I don't mean to criticize, I'd simply like to clarify something.

What exactly makes this "live" coding? As far as I can see, you have to press
Esc to run the code. If this is considered live, what would be a non-live
equivalent?

My understanding of live was that you see/hear changes as soon as you change
the code, but I didn't find that to be the case here.

~~~
seanmcdirmid
As far as I can tell, live coding is not about programming with "live"
feedback; it is about programming "live" in front of an audience, or otherwise
using programming as a musical instrument. Think performance, not
development.

Very confusing. I feel like I'm fighting a losing battle, since live coding
sounds so similar to live programming, but the terms mean completely different
things and arose in different contexts (live programming in a pedagogic
context, live coding in a performance context) at roughly the same time.

~~~
yaxu
Sean, I wish you'd take a closer look at live coding before saying this kind
of thing; you've very much got the wrong idea about live coding. I'm just off
to work, but I'll send you some references later.

Live coding is _absolutely everything about_ live feedback. This gives you
access to performing in front of a live audience, but that is only part of it.
It also allows new styles of collaborative programming, and exploring of ideas
in composition.

Live coding is about using programming languages to manipulate running code,
maintaining state (if there is any).

Check out the code in the "hacking perl in nightclubs" article that this one
links to. The software it discusses takes on live updates without restarts or
losing state (in fact, the code in the editor is part of the state). It also
has an option to take on edits every keypress, but it's off by default,
because it's impractical. Sometimes you just don't want 1 and 10 to be
interpreted on the way to 100.
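As a concrete illustration of that design choice, here is a hypothetical sketch in plain JavaScript (not the actual code of the Perl system or of this article): debouncing evaluation so an edit only takes effect after the typist pauses, which is one simple way to keep '1' and '10' from being interpreted on the way to '100'.

```javascript
// Hypothetical sketch: commit an edit only after `delayMs` of idle time,
// so intermediate buffer states are never interpreted.
function makeDebouncedEvaluator(evaluate, delayMs) {
  let timer = null;
  return function onEdit(source) {
    clearTimeout(timer);                              // cancel any pending run
    timer = setTimeout(() => evaluate(source), delayMs);
  };
}

// Example wiring (the editor API here is assumed, not real):
// const onEdit = makeDebouncedEvaluator(src => runLiveCode(src), 300);
// editor.on("change", () => onEdit(editor.getValue()));
```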

Live coding arose in an interdisciplinary context of music performance,
pedagogy, media theory, psychology and computer science.

The thesis you link to is really excellent, but has shared roots with live
coding and is referenced in the live coding literature.

~~~
seanmcdirmid
Alex, I've read up on as much about live coding as I could; I've even read
parts of your thesis. I really like the work, but the intention of the work is
completely different.

> Live coding is absolutely everything about live feedback. This gives you
> access to performing in front of a live audience, but that is only part of
> it. It also allows new styles of collaborative programming, and exploring of
> ideas in composition.

This really isn't the point. The "live feedback" is not about performance,
it's not about new styles of collaborative programming, it's not about
composition of time-based media. It is simply a better way to debug your
program, simply better feedback to the programmer while they are writing their
program.

> Live coding is about using programming languages to manipulate running code,
> maintaining state (if there is any).

You've stated this differently before:

> For me, live coding is about live, end-user programming of time-based media.
> Live coders write software for themselves, and the timeline of software
> development is often shared with the end result. This brings both social and
> creative aspects of programming to the fore.

Now, imagine my frustration when I'm trying to promote a new way of debugging
a program, and people say "we've heard about this before, but we aren't very
interested in music." Do you understand why I'm frustrated? Live programming
aims to change programming in general, you should be able to live program a
web app, a map reduce program, an operating system, a compiler, anything! This
is not the goal of live coding as I understand it.

> Check out the code in the "hacking perl in nightclubs" article that this one
> links to. The software it discusses takes on live updates without restarts
> or losing state (in fact, the code in the editor is part of the state).

But you see, that's where our disconnect is: LP is not just about updating
code without losing state, it's about changing the entire program's state as
if the new code HAD ALWAYS existed. Simply preserving state is not good
enough. Now in a dataflow language (like Quartz Composer), there is no state
and you simply re-execute the new wiring; very easy. But if there is state
involved, the problem becomes much harder. The live coding crowd hasn't done
anything to solve that problem yet, and in fact, they don't really need to; it
seems like preserving state is sufficient for your use cases.
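One way to make the "had always existed" semantics concrete is to log external inputs and replay them through the new code on every swap, recomputing state from scratch. The following is a hypothetical JavaScript sketch (my own illustration, not any system discussed in this thread):

```javascript
// Replay-based live update: rather than patching live state, keep a log
// of external inputs and recompute state whenever the code changes --
// as if the new code had been running from the start.
function makeReplayRuntime(initialState) {
  const inputLog = [];
  let step = (state, input) => state;   // program logic; identity to begin with
  let state = initialState;
  return {
    feed(input) {                       // a new external input arrives
      inputLog.push(input);
      state = step(state, input);
    },
    swapCode(newStep) {                 // live edit: replace logic, replay history
      step = newStep;
      state = inputLog.reduce(step, initialState);
    },
    get state() { return state; },
  };
}
```

For example, feeding 1 and 2 and then swapping in `(s, x) => s + x` yields state 3, exactly as if the summing code had been there for both inputs. Of course, real programs have inputs too large or too timing-sensitive to replay naively, which is why this is hard in general.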

> It also has an option to take on edits every keypress, but it's off by
> default, because it's impractical. Sometimes you just don't want 1 and 10 to
> be interpreted on the way to 100.

Did you ever watch the QuickTime videos embedded in my 2007 paper? I did just
that ('1', '10', '100') and the effect was quite nice; I got applause right
there when I gave the talk at Onward. It is definitely a UX challenge, and
Chris's discussion about steady frames, I think, is the key to a decent live
programming experience. Also, Bret Victor's demos have no delay in them
whatsoever; continuous feedback can be nice and exciting. Of course, we need
to do a lot of UX and implementation work before it is actually practical. The
live coding community has working systems today, you guys already have what
you need; we don't, we just have a bunch of prototypes and demos (unless we
count visual languages where liveness is easy).

> Live coding arose in an interdisciplinary context of music performance,
> pedagogy, media theory, psychology and computer science.

Sounds like a nice focused story to me :)

> The thesis you link to is really excellent, but has shared roots with live
> coding and is referenced in the live coding literature.

The important point to make about Chris's thesis is that it is very close to
Bret Victor's demos and goals as stated in his learnable programming work: it
is a new way to understand code while we are writing it. The story is very
narrow, nice, and easy to understand; there is no reason to expand it in other
ways.

~~~
yaxu
Hi Sean, yes, there may well be different intentions at play, but there are
also huge overlaps at the technical level. We are all concerned with liveness,
and your debugging is my (and I think Bret Victor's) exploratory programming.

I can understand your frustrations; let's talk by email about establishing a
clearer distinction between performance and more general contexts, if you
don't mind. But I think not on the basis that they are completely unrelated;
to me it makes sense to think of live coding in social situations (including
lectures, dojos, conference talks as well as a/v performances and group music
making) as an application of live programming languages.

I can agree that reinterpretation per keypress is useful in some very
particular circumstances, but not others. The difference is not in delay, but
in what constitutes an edit.

~~~
seanmcdirmid
I mostly agree with you, but I think there will be some confusion until we
work this out. I'm not sure what Bret Victor calls his work, but I found
the similarities between his work and Chris's thesis to be very striking, even
though I'm sure they are independent. Perhaps we can harmonize, or at least
figure out what the key messages are to avoid confusion.

I think you want to provide feedback for each discrete or continuous action by
the user. This is quite easy if we are relying on structured editing, but much
harder for free form text. My current policy is to re-execute memoization
units after each keystroke, though certain syntax and type errors can prevent
the executed expression tree from changing (but then the feedback is a
syntax/type error!). Getting live feedback also requires tolerating transient
syntax, semantic, and execution errors, and being fairly efficient. In this
case, we should tolerate and react to '1' and '10', even if they lead to a
program in a weird state (e.g., x - N, where N should be > 100); you still
want to show some kind of change. And this is where live programming is very
different: the steady frame that we are viewing while editing should always
CHANGE when we edit, meaning it should be focused on some execution result of
the code being edited. Doing something without immediate feedback is being a
sad panda.
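That keystroke policy might be sketched like this (a hypothetical plain-JavaScript reduction using eval, not the actual memoization-unit system described above): re-evaluate on every keystroke, and when the buffer is transiently broken, keep showing the last good result.

```javascript
// Per-keystroke re-evaluation that tolerates transient errors: if the
// current buffer fails to parse or run, keep the result of the last
// version that succeeded, so the "steady frame" still shows something.
function makeLiveEvaluator(run) {
  let lastGood;
  return function onKeystroke(source) {
    try {
      lastGood = run(source);
    } catch (err) {
      // transient syntax/runtime error: report it as feedback if desired,
      // but do not destroy the last good result
    }
    return lastGood;
  };
}
```

With `const live = makeLiveEvaluator(src => eval(src))`, typing '1 +' mid-edit keeps showing the previous value instead of breaking the session.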

~~~
yaxu
Yes, great, let's try to work this out - you should have an overlong email in
your inbox from me as a starter :)

I think I disagree with your definition of "edit". Perhaps this comes down to
"chunking" in human perception and action. When I type 100, I don't
consciously instruct my fingers to type each character, what I enact and
perceive is the number "100". From this perspective, it seems natural for the
programmer to control what an edit is.

I've heard VI users talk about edits in terms of breathing. You hit 'w' (or
whatever it is they do), type in your edit, then hit escape when you're done.
They describe this as natural in terms of breathing rhythm.

Of course, in a performance context there is a more concrete requirement that
edits need to be timed to happen in a certain way, and everything that happens
is part of the output, so you want complete control over what gets interpreted
when. The temporal relationship between programming and output is different
from the debugging case, which you are more interested in. Both cases are
concerned with liveness, but the constraints are different.

~~~
seanmcdirmid
I program with an IBM Model M keyboard, so I can hear every keystroke. If you
type fast enough, the programmer won't notice intermediate feedback going
through 1, 10, 100. You could even put in a delay, but I would hope it's not
necessary.

As for liveness, I think we disagree on what should be live. I only really
care about the feedback loop between the executing program and the programmer
editing that program. I don't expect anything else to be live, and actually,
it might not even be useful in some cases (e.g., if the program is executing
in real-time and is interactive or animated, I have nothing steady to shoot
at).

~~~
yaxu
Nice point about your keyboard, but I'd still contend that a series of fast
keystrokes is perceived as a thrum, not necessarily a series of clicks.

If you're programming DSP, then the haptic upper limits of typing are much
slower than the limits of aural perception. But I understand more about what
you mean by "steady frame" now.

Yes, I think you're right about our point of disagreement. In Chris Nash's
terms, you're interested in liveness solely in terms of the manipulation-driven
feedback loop, and I'm interested in it predominantly in terms of the
performance-driven feedback loop, and only secondarily in manipulation-driven
feedback.

The diagrams at the end of this paper might clarify:
[http://www.eecs.umich.edu/nime2012/Proceedings/papers/217_Fi...](http://www.eecs.umich.edu/nime2012/Proceedings/papers/217_Final_Manuscript.pdf)

(I've seen a more detailed version of this somewhere, maybe his PhD thesis)

------
mos2
This is pretty awesome. I went to a workshop by ixi lang creator Thor
Magnusson. The level of abstraction you get with ixi lang is pretty amazing,
but SuperCollider has a steep learning curve.

I love that live coding music is making its way to javascript / coffeescript.
I can't wait to see what people create with it.

------
roryokane
This is a great idea – a convenient environment for writing music-generating
code. It needs far more helper functions and documentation before it’s
generally usable, though.

I’ve written some GitHub Issues: <https://github.com/bengl/beatsio/issues>

------
1wheel
Cool! I threw together something similar a few weekends ago:

<http://roadtolarissa.com/synth-scales/>

------
cliftonk
Have you ever played with Overtone? It's pretty awesome
<http://vimeo.com/22798433>

------
gojomo
See also 'bytebeat':

<http://canonical.org/~kragen/bytebeat/>
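For anyone unfamiliar: a bytebeat program is a single expression of the sample counter t, and the low byte of its value is the next audio sample, conventionally played at 8000 Hz. A minimal sketch (the formula below is in the classic bytebeat style, not necessarily one from the linked page):

```javascript
// Bytebeat in a nutshell: each 8-bit audio sample is a formula of the
// sample counter t, conventionally played back at 8000 Hz.
function bytebeat(formula, numSamples) {
  const samples = new Uint8Array(numSamples);
  for (let t = 0; t < numSamples; t++) {
    samples[t] = formula(t) & 0xff;   // keep only the low byte
  }
  return samples;
}

// Roughly one second of audio from a classic-style formula:
const melody = bytebeat(t => t * (42 & (t >> 10)), 8000);
```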

------
canibanoglu
Great stuff! If only I could find a way of doing this the other way around...
(audio to sheet music)

~~~
sil3ntmac
There are plenty of libraries for this, provided you can export your audio to
MIDI. Here's one:

midisheetmusic.sourceforge.net

Hell, just upload your MIDI to hamie.net and they'll do it for you (albeit
horrendously).

~~~
canibanoglu
Ah, I should have been clearer. MIDI to sheet music conversion is trivial.
What I want to do is take real audio as input and convert that, which, from
what I've read, is really hard because MIDI and audio formats
(FLAC/WAV/MP3, etc.) are very different.

There are some tools that I could find but so far none has come up with good
conversion from FLAC or MP3.

I'm mainly interested in classical music, and that kind of complicates things.
As the number of voices in your audio increases, it gets that much harder to
export it to MIDI. I've been thinking about emphasising the melody line and
exporting that to MIDI, but so far nothing :P

Perhaps a different approach would work better.

But thanks a lot for the help :)

~~~
yaxu
The research field trying to tackle this is called "music information
retrieval"; that may help with your search. I think they have made some
headway, but it involves quite a few challenges - auditory scene analysis,
source separation, pitch labelling, etc.
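To give a taste of the most basic building block involved, here is a hypothetical sketch (my own illustration, not from any MIR toolkit) of naive autocorrelation pitch detection on a mono sample buffer; real transcription systems layer onset detection, source separation, and note labelling on top of ideas like this:

```javascript
// Naive autocorrelation pitch detector: find the lag (in samples) at
// which the signal best correlates with itself, and convert it to Hz.
// Works tolerably on clean monophonic input only.
function detectPitch(samples, sampleRate, minHz = 50, maxHz = 1000) {
  const minLag = Math.floor(sampleRate / maxHz);
  const maxLag = Math.floor(sampleRate / minHz);
  let bestLag = 0;
  let bestCorr = 0;
  for (let lag = minLag; lag <= maxLag; lag++) {
    let corr = 0;
    for (let i = 0; i + lag < samples.length; i++) {
      corr += samples[i] * samples[i + lag];
    }
    if (corr > bestCorr) {
      bestCorr = corr;
      bestLag = lag;
    }
  }
  return bestLag ? sampleRate / bestLag : null;   // null if nothing correlated
}
```

Polyphonic classical recordings defeat this immediately: the autocorrelation peaks of several simultaneous voices overlap and interfere, which is exactly why source separation comes first in the MIR pipeline.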

~~~
canibanoglu
That is extremely helpful, thanks a million for the pointer! :)

------
sil3ntmac
This is awesome! Someone edit the title though because I had no idea this was
live coding "beats".

------
fananta
Awesome stuff Bryan!

