

The IDE as a value - mcrittenden
http://www.chris-granger.com/2013/01/24/the-ide-as-data/

======
krosaen
I like this trend towards values instead of objects; Rich's talk is really
worth watching too <http://www.infoq.com/presentations/Value-Values>. Using
this approach in UIs stretches the idea even further, into a realm where I
previously thought OO had a sweet spot.

I still have to convince myself, though, of why this approach makes sense
over the traditional OO / encapsulation approach. My current line of reasoning
is that while having well-defined services and interfaces can help organize a
software system, the actual arguments and return values of these endpoints
are really better thought of as values - and the expected structure of those
values can certainly be part of the API. You just don't force people to
construct and destruct objects for everything you are passing around.

A couple of related resources on this topic:

"Stop writing classes" from PyCon <http://www.youtube.com/watch?v=o9pEzgHorH0>

Rob Pike's reflection on using data vs OO:

[https://plus.google.com/101960720994009339267/posts/hoJdanih...](https://plus.google.com/101960720994009339267/posts/hoJdanihKwb)

~~~
seanmcdirmid
I don't really get it. From the blog post, Granger is describing a very
imperative object system, but you seem to be claiming that it is somehow a
value-oriented functional system. What am I missing?

~~~
krosaen
That's a good question - there are still objects, or groupings of data, and
even tags that identify which behaviors or functions apply to them in various
contexts. What strikes me as different is that the entire hierarchy is a
nested data structure that is easy to reason about and modify at runtime
without the use of a fancy debugger. Using 'encapsulation' to hide the
underlying data in each grouping, and binding functions / methods directly to
each object, would in this case only make it harder to work with. Why? Because
viewing, constructing, augmenting at runtime, or serializing the hierarchy
would require constructors, serializers, deserializers, etc., instead of
having something that is just a data structure, ready to be viewed, put on a
queue, sent over the wire, etc.
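A minimal Python sketch of that distinction (the shape of the map is entirely hypothetical, not Light Table's actual structure): when the hierarchy is plain nested data, viewing, modifying, and serializing it are all ordinary data operations.

```python
import json

# A UI "object" hierarchy kept as plain nested data (hypothetical shape,
# loosely modeled on the post's description): no constructors or custom
# serializers are needed to inspect, copy, or transmit it.
editor = {
    "type": "editor",
    "tags": ["editor", "clojure"],
    "children": [
        {"type": "pane", "tags": ["pane"], "width": 640},
        {"type": "statusbar", "tags": ["statusbar"], "text": "ready"},
    ],
}

# Modifying and serializing are just map/list operations.
editor["children"][1]["text"] = "compiling"
wire = json.dumps(editor)       # ready to go over the wire
restored = json.loads(wire)     # ...and back, with no deserializer class
print(restored["children"][1]["text"])
```

With an encapsulated class hierarchy, each of those steps would need dedicated accessor, constructor, and (de)serialization code.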

The idea of 'behaviors' also provides flexibility in what functions can act on
any grouping of data - the key-value pairs needn't be associated with a
'class' that dictates what the associated functions will be, which adds more
flexibility. As the author hints, there are other ways of achieving this
agility - dynamic mixins, for example.

Finally, while having a well-defined protocol (or API or endpoint or whatever
you want to call it) is valuable and helps organize code, I think taking this
idea to the extreme - saying that every single object or piece of data you
pass around as an argument or return value from these endpoints needs to be
expressed as an abstract protocol itself - is where you really start to lose.
An expected format for the data structures of the arguments and return values
can and should be part of an API, but needing to wrap them in objects doesn't
really help - and that's where the trend I speak of begins to seem like
progress to me.

~~~
seanmcdirmid
This is quite the standard dynamic-languages argument; I'm not seeing much new
here, but then this wasn't meant to be new. However, my point is that this is
heavily object-oriented and heavily imperative. It is just a different kind of
object system that people might not be used to. If this ever catches on, we'll
just need another Treaty of Orlando to re-harmonize the object community.

To be honest, a lot of the object system seems confused and muddled. I mean,
there are a million different ways to get the kind of flexibility in what the
author calls "behaviors" (an incredibly overloaded term, BTW): protocols,
traits, type classes, etc. Dynamic mixins are nothing new here either (I've
designed a couple of research languages with dynamic mixin inheritance, fun
stuff).

As an academic, I want to see reasons why X was chosen and comparisons with
previous systems that chose Y instead; but I know I won't get this from most
of the people who do the interesting work in this field, so I have to figure
it out for myself. Calling this an object system opens up a floodgate of
related systems that I can compare it against.

------
lispm
Behaviors? Methods.

Tags? Classes.

Just data? In Lisp the code, the functions, the symbols, etc. already are
data.

Sure, one can build another dynamic object system and use fancy names for it.
It is really difficult to count the number of times this has been done in the
past.

A typical Lisp-based system which did that was Object Lisp:

<http://lispm.dyndns.org/docs/ObjectLisp-Manual.pdf>

Used by LMI for their user interface and by Coral for Macintosh Common Lisp
(until Object Lisp was replaced by CLOS).

The idea that a) one can add and remove behaviors (methods) and b) add and
remove tags (classes / mixins / ...) at runtime is found in almost every Lisp-
based object system. Operations for that are available in every system.

Lisp-based object systems are not so much a language as a collection of
objects and classes which interact in a defined way - hence, a 'system'.
That's also why we have the Common Lisp Object SYSTEM, and not the Common Lisp
Object Language. CLOS is described in terms of objects and their interactions
(classes and generic functions are objects themselves) - not so much as a
grammar of language constructs.
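A loose Python analogy for that last point (CLOS proper is far richer: multiple dispatch, method combination, a full metaobject protocol): behavior hangs off a standalone generic function that dispatches on its argument, not off a class hierarchy.

```python
# Sketch: a generic function as a first-class object that methods are
# registered on - behavior belongs to the function, not to a class.
from functools import singledispatch

@singledispatch
def describe(obj):
    # Default method, used when no more specific one is registered.
    return "some object"

@describe.register(int)
def _(obj):
    return f"the integer {obj}"

@describe.register(list)
def _(obj):
    return f"a list of {len(obj)} items"

print(describe(42))
print(describe([1, 2, 3]))
```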

~~~
seanmcdirmid
This isn't very fair. Yes, Granger is describing another kind of object
system, and yes, it is OOP. And yes, this is in the long tradition of CLOS.
But each of these systems is fairly unique and deserves its own description;
in this case, reactivity is extremely important.

~~~
lispm
what is unique about it?

~~~
seanmcdirmid
As mentioned, the reactivity makes this interesting to me. The amount of
reactivity required for a project like LightTable is pretty ambitious, which
we don't really see in most Lisp object systems (or any object systems for
that matter).

~~~
lispm
What kind of reactivity does he describe in this article?

~~~
seanmcdirmid
~70% of his article discusses reactivity:

> So what is the thing we want to compose if it's not the state? It's the
> reactions that objects have when events flow across the system. What tends
> to define variation in evented applications isn't the state of objects
> themselves, but how they react to certain messages. Given that, we want to
> be able to compose these reactions to create new versions of the same
> object. ... Behaviors are a way of creating bite-sized, reusable reactions
> to messages.

Then a comparison with the conventional approach:

> The way events are traditionally done in most modern platforms, you end up
> with hidden collections of listeners that contain nameless functions. This
> hides a vital bit of information - how do you know what's going to happen at
> runtime and how would you change that? ... By comparison, behaviors carry
> more information and are bound at call time. We can freely modify their
> reactions at runtime just by replacing the map in the LT data structure. And
> if we ever want to know what an object is going to do, we just take a look
> at that object's set of bound behaviors.
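What the quoted passage describes can be sketched roughly like this (the names and the map's shape are made up for illustration; this is not Light Table's actual code): reactions live in a plain, inspectable map, so dispatch is bound at call time and can be changed by swapping map entries.

```python
# Sketch: a visible map from (tag, message) to a named handler function,
# instead of hidden listener collections of nameless functions.
behaviors = {
    ("editor", "on-save"): lambda obj: f"saved {obj['name']}",
}

def raise_event(obj, message):
    # Late-bound lookup: whatever is in the map *now* handles the event.
    for tag in obj["tags"]:
        handler = behaviors.get((tag, message))
        if handler:
            return handler(obj)
    return None  # no bound behavior for this message

doc = {"name": "core.clj", "tags": ["editor"]}
print(raise_event(doc, "on-save"))

# Replacing the map entry at runtime changes the reaction everywhere,
# and the object's future behavior is readable straight from the map.
behaviors[("editor", "on-save")] = lambda obj: f"autosaved {obj['name']}"
print(raise_event(doc, "on-save"))
```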

Finally, he talks about tagging, which might seem unrelated, but then I also
adopted a similar approach in my own reactive object system [1] for similar
reasons: I needed a way to externally abstract over objects to get them to
behave in similar ways (and ya, I did dynamic mixins too, this is a fun
field).

[1]
[http://research.microsoft.com/apps/pubs/default.aspx?id=1793...](http://research.microsoft.com/apps/pubs/default.aspx?id=179365)

~~~
lispm
That does not sound 'unique'. Reusable reactions to messages in Flavors are
called methods, and the reuse is done via Mixins and Method Combinations.

In Lisp you won't use 'nameless' functions. Typically one uses symbols, which
are late bound to functions. Thus the symbol table is the map we can modify.
In a meta-object-based object system like CLOS, generic functions are mapped
to symbols and they have 'maps' of methods which will be combined at runtime.

The Xerox PARC researchers who defined CLOS worked a lot on the problems
(mixins, active values, open implementations, meta-objects in software, ...).
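lispm's point about symbols can be sketched in Python with an explicit name table (a loose analogy - a Lisp symbol table is much richer than a dict): callers hold a name, not a function object, so redefining the name changes behavior at the next call.

```python
# Sketch of late binding through a name table: the "symbol table" is just
# a map we can modify, and lookup happens at call time, not definition time.
symbols = {}

def call(name, *args):
    return symbols[name](*args)   # resolved on every call

symbols["greet"] = lambda who: f"hello, {who}"
print(call("greet", "world"))

# Rebinding the name retroactively changes every caller's behavior.
symbols["greet"] = lambda who: f"bonjour, {who}"
print(call("greet", "world"))
```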

~~~
seanmcdirmid
Nothing is truly new under the sun. Perhaps I should have said "novel and
interesting" rather than "unique". You've also reminded me of Gabriel's most
recent Onward essay:

<http://www.dreamsongs.com/Files/Incommensurability.pdf>

I think Gabriel misidentifies this as a delta between science and engineering;
I think it's more about the difference between communities. We use different
terminology and different sources, and we have completely different underlying
cultures. And you have to admit, the CLOS community is pretty unique; anyone
looking in from the outside is bound to have a very different frame of
reference.

I've gone through as much CLOS work as I could: mixins inspired by an ice
cream shop; active values (which are really just properties with custom
get/set methods today), which can serve as a nucleus for reactive programming
but are not very sophisticated in managing change propagation (I think FRP
signals are better, but you can clearly see the relationship). There is a lot
there, but it's not the end-all of everything; there is some honest innovation
going on here.

------
metajack
This was an excellent talk at the Conj, and it was the best explanation of
entity systems I've yet found.

I also recommend his Anatomy of a Knockout, which covers more of this topic:

<http://www.chris-granger.com/2012/12/11/anatomy-of-a-knockout/>

with discussion:

<http://news.ycombinator.com/item?id=4907791>

------
endlessvoid94
Code editor, modifiable at runtime, built on a dialect of Lisp...

;-)

I'm a fan of Light Table. Just had to point it out.

~~~
felideon
Sounds like you're looking for Emacs:

<http://www.gnu.org/software/emacs/>

~~~
endlessvoid94
That was the joke :-P

------
seanmcdirmid
I would love to see more context about the architecture of the system; e.g.
how are the objects organized and what are they reacting to? I've seen many
reactive object systems and I can't easily place this one in context, but
maybe I'm spoiled by academic publishing (where we understand a new system in
relation to older systems).

It seems like this object system is related to Rodney Brooks' behavior-based
subsumption architecture (used mostly in robotics and games) - or is it? It
uses much of the same terminology, but doesn't actually have subsumption (as
conflict resolution). Strange. It also seems to be very imperative, in the
sense that behaviors and behavior/object relationships seem to be managed very
discretely, as opposed to using any kind of declarative continuous abstraction
(say, data binding or guarded connections). This bothers me a bit, as I have
found such systems to be very unmanageable as they scale. But I still like
where this is going; I can't wait to learn more!

------
agumonkey
I've screamed this elsewhere already: it feels like Smalltalk is back, and
maybe even Genera OS?

~~~
seanmcdirmid
This is much better than Smalltalk. In Smalltalk, you could just edit code on
the fly, but nothing would happen if that code wasn't continuously executing
in an infinite loop. LightTable, on the other hand, supports real liveness.
It's something that we never got in Smalltalk or Lisp.

Edit: Morphic (from Self) does support liveness, but only when it is being
edited at the graphical level. If you are editing code, you are stuck in the
same refresh trap as Smalltalk or Lisp - but to be fair, morphs are
continuously updating themselves in infinite loops, so it doesn't matter :)

~~~
agumonkey
Fine, I forgot Self, forgive me :p

That's what I'm waiting for anyway, lazy reactive systems

------
martinced
First let me say that I really don't want to be mean here: I'll try to offer
constructive criticism.

I've worked professionally in the video-game industry and I must say that I'm
a bit disappointed by all these blogs / videos trying to promote tools and/or
languages by using very _poorly_ conceived games.

Take ChromaShift, for example: it's terribly bad. We're in 2013, and the thing
draws fewer "pixels" and yet is slower than any 2D scroller from the 80s
running on a C64.

Sure, it was done in 48 hours, and kudos for that... But back in the nineties
we'd enter 48-hour or 24-hour "demo" competitions and come up with stuff that
looked incredibly better than that.

The author of ChromaShift made a talk about the "Component Entity System" he's
using.

And that's where the "suspension of disbelief" simply doesn't work for me:
every time he shows the game, it's freakin' ugly and slow as molasses.

Then comes the disappointment: if you want to run at a reasonable framerate
you, of course, need to perform quite some logic in about 16 milliseconds (the
author correctly mentions this in his talk).

And... Drumrolls: simply "iterating" over a seq made of a hundred elements in
Clojure apparently takes more time than that. Not to mention that if you
create objects every frame, then at some point the GC will have to kick in and
your real-time performance will suddenly drop (the author says as much in the
talk I watched about CES/ChromaShift).

But why oh why use a language that is arguably not meant for writing video
games (Clojure) to... write a game as an example of what can be done!? (or as
an example of how powerful an IDE supposedly is!?)

The "problem" is that they're talking about that "CES" as it was the one way
to create games. But how many games written in Clojure have been shipped?
Heck, how many games written in Clojure are even _playable_?

There are, today, lots of people writing games (either professionals or
amateurs) and selling them (or giving them away for free).

And not a single one of these games is using that "Component Entity System".
And hardly any of them is using Clojure (there may be one or two exceptions,
but that's about it).

And every single one of these real games (even the simplest smartphone game)
looks better and is more responsive than the poor examples we see in Clojure
talks.

So how are we, Clojure believers (I'm a big Clojure fan and investing lots of
time and energy learning it), supposed to take any of this seriously?

Once again, I don't want to be mean: these guys are typically good programmers
but, to be honest, they don't know _anything_ about game programming and that
is a bit saddening.

EDIT: I won't even start talking about the "quality" of the "text editor" part
of Light Table. I _like_ the concept of LT, but I do want a client/server mode
where I can plug in vim or Emacs (or Sublime Text 2 or whatever) as my text
editor. Otherwise I'm never going to take LT seriously.

~~~
ibdknox
It's a bit pointless to talk about the performance of a game built for a 48
hour coding competition, but since you brought it up...

1) What are you basing your judgement of performance on? Not the HD-recorded
video that had two of my cores pegged the whole time, I hope - because that's
not at all representative. ChromaShift runs at 60fps on every piece of
hardware I own (including a macbook from about 4 years ago) and generally runs
at < 2ms per frame. There's a bug for some chipsets that causes Chrome and FF
to render canvas very slowly when hardware acceleration is on - if you're
actually running the game, try turning it off.

2) Even if it were slow, the comment you're making would be about JavaScript
and Canvas performance, not CLJS. The performance-intensive bits of the game
all delegate to JS, just like the performance-intensive bits of any Clojure
app are typically wrapped Java.

    
    
> But back in the nineties we'd enter 48-hour or 24-hour "demo" competitions
> and come up with stuff that looked incredibly better than that.

None of us are game designers or builders. I'm sorry it didn't live up to a
professional's expectations for a 48 hour game.

    
    
> And... Drumrolls: simply "iterating" over a seq made of a hundred elements
> in Clojure apparently takes more time than that.

It's tens of thousands of elements and seqs aren't meant to be fast. You can
easily just use an array when performance is critical.

    
    
> But why oh why use a language that is arguably not meant for writing video
> games (Clojure) to... write a game as an example of what can be done!? (or
> as an example of how powerful an IDE supposedly is!?)

The game was never used as an example of how powerful the IDE is. The game is
just a game. The concepts behind the game happen to be similar to how LT was
designed. Do you have any issues with the actual design of LT? This post has
nothing to do with ChromaShift.

    
    
> And not a single one of these games is using that "Component Entity System".

No one must build games on top of Unity and all the examples going back to the
early 00's (like Dungeon Siege) must be a figment of my imagination. Entity
systems are not new by any stretch of the imagination. [1]

    
    
> So how are we, Clojure believers (I'm a big Clojure fan and investing lots
> of time and energy learning it), supposed to take any of this seriously?

Clojure(Script) is very fast, but it's not C and never will be. I've never run
into a performance issue that I couldn't solve by dropping down to the
platform and then wrapping it nicely to get native JVM or JS performance.

[1]: <http://scottbilas.com/games/dungeon-siege>

BTW, constructive criticism doesn't typically use words like "terrible" :)

~~~
snprbob86
> games built on top of Unity

For the curious, here is Unity's components reference page:

<http://docs.unity3d.com/Documentation/Components/index.html>

~~~
Impossible
There are shipping AAA games and game engines that use some variation of a
component system for entities, including Unreal Engine 3 and CryEngine.

In practice, "component entity system" simply means favoring composition over
inheritance. There are different ways you can architect your code to fit that
definition, some of them efficient and cache-friendly.
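A minimal sketch of that composition-over-inheritance idea (illustrative only, not any engine's API; all names are made up): an entity is just an id, components are plain data keyed by type, and systems act on whichever entities carry the components they need.

```python
# Minimal entity/component sketch: no class hierarchy, only data + systems.
from itertools import count

entity_ids = count()
# component type -> entity id -> component data
components = {"position": {}, "velocity": {}}

def make_entity(**comps):
    eid = next(entity_ids)
    for kind, data in comps.items():
        components[kind][eid] = data
    return eid

def movement_system(dt):
    # Composition, not inheritance: anything with position AND velocity moves.
    for eid, vel in components["velocity"].items():
        pos = components["position"].get(eid)
        if pos is not None:
            pos["x"] += vel["dx"] * dt
            pos["y"] += vel["dy"] * dt

player = make_entity(position={"x": 0.0, "y": 0.0},
                     velocity={"dx": 1.0, "dy": 0.0})
scenery = make_entity(position={"x": 5.0, "y": 5.0})  # no velocity: ignored

movement_system(dt=2.0)
print(components["position"][player]["x"])   # player moved, scenery didn't
```

New capabilities are added by attaching another component, not by subclassing.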

~~~
Tuna-Fish
And to elaborate more: The reason for this shift is that the current-gen
consoles (xbox360 and PS3) are really, really bad at random access into
memory.

Chasing pointers on a PC is bad; chasing pointers on them is suicidal. With
them, it's better to do a linear copy of ~3kb than it is to dereference one
more uncached pointer. And this is especially bad because they don't have any
kind of decent prefetchers - if you haven't actually touched an object, it's
probably uncached.

Because of this, if you use traditional OO to structure your code, you leave
an order of magnitude or more of performance on the table. And since the CPUs
are slow as it is, you can't afford to do this.
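The layout shift behind this argument can be illustrated (only the layouts, not the cache behavior - Python itself won't show that) by contrasting a pointer-heavy array-of-objects with a struct-of-arrays:

```python
# "Array of objects": a list of dicts, each field a separate heap object
# reached through references - the pointer-chasing layout.
aos = [{"x": float(i), "y": 0.0} for i in range(4)]

# "Struct of arrays": one contiguous buffer per field - the linear layout
# that cache-starved hardware rewards.
from array import array
xs = array("d", (float(i) for i in range(4)))
ys = array("d", (0.0 for _ in range(4)))

# The update loop walks memory in order instead of chasing references.
for i in range(len(xs)):
    xs[i] += 1.0
print(list(xs))
```

Entity systems fit this naturally: each component type can live in its own contiguous array, and systems iterate over those arrays linearly.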

