The IDE as a value (chris-granger.com)
116 points by mcrittenden on Jan 25, 2013 | hide | past | favorite | 59 comments



I like this trend towards values instead of objects; Rich Hickey's talk is really worth watching too: http://www.infoq.com/presentations/Value-Values. Using this approach in UIs stretches the idea even further, into a realm where I previously thought OO had a sweet spot.

I still like to convince myself, though, as to why this approach makes sense over the traditional OO / encapsulation approach. My current line of reasoning is that while having well defined services and interfaces can help organize a software system, the actual arguments and return values of these end points are really better thought of as values - and the expected structure of these values can certainly be part of the API. You just don't force people to construct and destruct objects for everything you are passing around.

A couple of related resources on this topic:

"Stop writing classes" from PyCon http://www.youtube.com/watch?v=o9pEzgHorH0

Rob Pike's reflection on using data vs OO:

https://plus.google.com/101960720994009339267/posts/hoJdanih...


> I still like to convince myself, though, as to why this approach makes sense over the traditional OO / encapsulation approach.

I've become a 100% believer that Values are preferable to Objects & Encapsulation, full stop. However, I don't think that means you abandon objects, state, encapsulation 100%. I think that just means that you minimize them significantly.

I just watched a talk by Stuart Halloway [1] on Datomic where he makes a little "play at home" fill-in-the-blanks table of design decisions and implications. He makes the assertion that if you take "program with values" to be the only given filled-in table cell, you can make some arbitrary decisions for one or two other spots and the rest will fall out trivially, with interesting properties. I guess the point is that programming with values affords you so much freedom and flexibility.

[1] http://www.infoq.com/presentations/Impedance-Mismatch


I agree with the trend, and also with the sentiment around interfaces.

Mutable/encapsulated approaches (which don't end at OOP, but rather continue into language design) act to define protocols. Protocols have a necessarily stateful nature; in the best case, the state is a single value at a single time, but in many real-world situations, state change events are frequent and need careful attention. Source code itself tends to need hundreds of changes to reach application goals, regardless of the language.

Functional and immutable style, on the other hand, acts on processes that are inherently computational, and this is the "nitty gritty" of most business logic. Even if the system is designed in an imperative fashion, it can bolster itself through a combination of a few "key methods" that apply functional style, and a type system that imposes a substantial amount of immutability.

The tricky part with OOP as we know it is to recognize when you're making a protocol, and when you're making an immutable computation. Many of the OO type systems used in industry lack the expressiveness to distinguish the two, and rush the natural life cycle of the architecture by imposing classes too early.


I don't really get it. From the blog post, Granger is describing a very imperative object system, but you seem to be claiming that it is somehow a value-oriented functional system. What am I missing?


That's a good question - there are still objects, or groupings of data, and even tags that identify which behaviors or functions apply to them in various contexts. What strikes me as different is that the entire hierarchy is a nested data structure that is easy to reason about and modify at runtime without the use of a fancy debugger. Using 'encapsulation' to hide the underlying data in each grouping, and binding functions/methods directly to each object, would in this case only make it harder to work with. Why? Because viewing, constructing, augmenting at runtime, or serializing the hierarchy would require constructors, serializers, deserializers, etc., instead of having something that is just a data structure, ready to be viewed, put on a queue, sent over the wire, and so on.

The idea of 'behaviors' also provides flexibility in what functions can act on any grouping of data - the key-value pairs needn't be associated with a 'class' that dictates what the associated functions will be, which adds more flexibility. As the author hints, there are other ways of achieving this agility - dynamic mixins, for example.

Finally, while having a well defined protocol (or API or endpoint or whatever you want to call it) is valuable and helps organize code, I think taking this idea to the extreme and saying that every single object or piece of data you pass around as arguments or return values from these end points needs to be expressed as an abstract protocol itself is where you really start to lose. An expected format of the data structures of the arguments and return values can and should be part of an API, but needing to wrap them in objects doesn't really help - and that's where this trend I speak of begins to seem like progress to me.
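A rough analogy in Python (all names here are illustrative, not Light Table's actual API; LT itself is written in ClojureScript): objects are plain dicts, and a separate map from behavior names to functions decides what reacts to a message, so both the objects and their wiring stay inspectable data.

```python
import json

# Hypothetical sketch of tag/behavior-style dispatch. Objects are plain data;
# behaviors live in a separate, freely editable map rather than being bound
# to a class.

behaviors = {}  # behavior name -> reaction function

def defbehavior(name, fn):
    behaviors[name] = fn

def raise_event(obj, event, *args):
    # Reactions are looked up at call time from the object's behavior list,
    # so replacing entries in `behaviors` changes runtime behavior directly.
    for name in obj["behaviors"]:
        fn = behaviors.get(name)
        if fn is not None:
            fn(obj, event, *args)

defbehavior("echo", lambda obj, event, *args: print(event, args))

editor = {"tags": ["editor"], "behaviors": ["echo"], "content": ""}
raise_event(editor, "key", "a")  # prints the event via the bound behavior

# The object is just a dict: trivially serializable, no constructors needed.
print(json.dumps(editor))
```

The point of the sketch is the last line: because nothing is hidden behind accessors, viewing, sending, or persisting the object needs no extra machinery.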


This is the standard dynamic-languages argument; I'm not seeing much new here, but then this wasn't meant to be new. However, my point is that this is heavily object-oriented and heavily imperative. It is just a different kind of object system that people might not be used to. If this ever catches on, we'll just need another Treaty of Orlando to re-harmonize the object community.

To be honest, a lot of the object system seems confused and muddled. I mean, there are a million different ways to get the kind of flexibility in what the author calls "behaviors" (an incredibly overloaded term, BTW): protocols, traits, type classes, etc. Dynamic mixins are nothing new here either (I've designed a couple of research languages with dynamic mixin inheritance, fun stuff).

As an academic, I want to see reasons why X was chosen and comparisons with previous systems that chose Y instead; but I know I won't get this from most of the people who do the interesting work in this field, so I have to figure it out for myself. Calling this an object system opens up a floodgate of related systems that I can compare it against.


Behaviors? Methods.

Tags? Classes.

Just data? In Lisp the code, the functions, the symbols, etc. already are data.

Sure, one can build another dynamic object system and use fancy names for it. It is really difficult to count the number of times this has been done in the past.

A typical Lisp-based system which does that was Object Lisp:

   http://lispm.dyndns.org/docs/ObjectLisp-Manual.pdf
Used by LMI for their user interface and by Coral for Macintosh Common Lisp (until Object Lisp was replaced by CLOS).

The idea that a) one can add and remove behaviors (methods) and b) add and remove tags (classes / mixins / ...) at runtime is found in almost every Lisp-based object system. Operations for that are available in every system.

Lisp-based object systems are not so much a language as a collection of objects and classes which interact in a defined way. Called a 'system'. That's also why we have the Common Lisp Object SYSTEM, and not the Common Lisp Object Language. CLOS is described in terms of objects and their interactions (classes and generic functions are objects themselves) - not so much as a grammar of language constructs.


This isn't very fair. Yes, Granger is describing another kind of object system, and yes, it is OOP. And yes, this is in the long tradition of CLOS. But all of these systems are fairly unique and each deserves its own description; in this case, reactivity is extremely important.


what is unique about it?


As mentioned, the reactivity makes this interesting to me. The amount of reactivity required for a project like LightTable is pretty ambitious, which we don't really see in most Lisp object systems (or any object systems for that matter).


What kind of reactivity does he describe in this article?


~70% of his article is discussing reactivity:

> So what is the thing we want to compose if it's not the state? It's the reactions that objects have when events flow across the system. What tends to define variation in evented applications isn't the state of objects themselves, but how they react to certain messages. Given that, we want to be able to compose these reactions to create new versions of the same object. ... Behaviors are a way of creating bite-sized, reusable reactions to messages.

Then a comparison with the conventional approach:

> The way events are traditionally done in most modern platforms, you end up with hidden collections of listeners that contain nameless functions. This hides a vital bit of information - how do you know what's going to happen at runtime and how would you change that? ... By comparison, behaviors carry more information and are bound at call time. We can freely modify their reactions at runtime just by replacing the map in the LT data structure. And if we ever want to know what an object is going to do, we just take a look at that object's set of bound behaviors.

Finally, he talks about tagging, which might seem unrelated, but then I also adopted a similar approach in my own reactive object system [1] for similar reasons: I needed a way to externally abstract over objects to get them to behave in similar ways (and ya, I did dynamic mixins too, this is a fun field).

[1] http://research.microsoft.com/apps/pubs/default.aspx?id=1793...


That does not sound 'unique'. Reusable reactions to messages in Flavors are called methods, and the reuse is done via mixins and method combinations.

In Lisp you won't use 'nameless' functions. Typically one uses symbols, which are late bound to functions. Thus the symbol table is the map we can modify. In a meta-object-based object system like CLOS, generic functions are mapped to symbols and they have 'maps' of methods which will be combined at runtime.

The Xerox PARC researchers who defined CLOS worked a lot on the problems (mixins, active values, open implementations, meta-objects in software, ...).


Nothing is truly new under the sun. Perhaps I should have used the word novel and interesting rather than unique. You've also reminded me of Gabriel's most recent Onward essay:

http://www.dreamsongs.com/Files/Incommensurability.pdf

I think Gabriel misidentifies this as a delta between science and engineering, but I think it's more about the difference between communities: we use different terminology, different sources, we have completely different underlying cultures. And you have to admit, the CLOS community is pretty unique; anyone looking from the outside is bound to have a very different frame of reference.

I've gone through as much CLOS work as I could: mixins inspired by an ice cream shop, active values (which are really just properties with custom get/set methods today) that can serve as a nucleus for reactive programming but are not very sophisticated in managing change propagation (I think FRP signals are better, but you can clearly see the relationships). There is a lot there, but it's not the end-all of everything; there is some honest innovation going on here.


I had the exact same reaction. It is unfortunate Chris didn't take time to respond to your comment...

It is always interesting to see different object systems being used. I just hope we will get an honest review of it in a few months, after it has been used to develop a large application like Light Table.


There's nothing wrong with using something like that, but it's hardly novel in any way. I have no doubt that one can develop a larger application with it...


This was an excellent talk at the Conj, and it was the best explanation of entity systems I've yet found.

I also recommend his Anatomy of a Knockout, which covers more of this topic:

http://www.chris-granger.com/2012/12/11/anatomy-of-a-knockou...

with discussion:

http://news.ycombinator.com/item?id=4907791


Code editor, modifiable at runtime, built on a dialect of Lisp...

;-)

I'm a fan of Light Table. Just had to point it out.


Light Table is Emacs for the 21st century. Emacs lacks reactivity and doesn't support liveness (you have to manually refresh).


Sounds like you're looking for Emacs:

http://www.gnu.org/software/emacs/


That was the joke :-P


I would love to see more context about the architecture of the system; e.g. how are the objects organized and what are they reacting to? I've seen many reactive object systems and I can't easily place this one in context, but maybe I'm spoiled by academic publishing (where we understand a new system in relation to older systems).

It seems like this object system is related to Rodney Brooks' behavior-based subsumption architecture (used mostly in robotics and games), or not? It uses much of the same terminology, but doesn't actually have subsumption (as conflict resolution). Strange. It also seems to be very imperative, in the sense that behaviors and behavior/object relationships seem to be managed very discretely as opposed to using any kind of declarative continuous abstraction (say, data binding or guarded connections). This bothers me a bit, as I have found such systems to become very unmanageable as they scale. But I still like where this is going; I can't wait to learn more!


I've screamed this elsewhere already: it feels like Smalltalk is back, and maybe even GeneraOS?


This is much better than Smalltalk. In Smalltalk, you could just edit code on the fly, but nothing would happen if that code wasn't continuously executing in an infinite loop. LightTable, on the other hand, supports real liveness. It's something that we never got in Smalltalk or Lisp.

Edit: Morphic (from Self) does support liveness, but only when it is being edited at the graphical level. If you are editing code, you are stuck in the same refresh trap as Smalltalk or Lisp, but to be fair, morphs are continuously updating themselves in infinite loops so it doesn't matter :)


Fine, I forgot Self, forgive me :p

That's what I'm waiting for anyway, lazy reactive systems


I'm sad that this comment is the last on this page and not the first. I imagine that - after LT is complete - some Smalltalkers will hold a competition to replicate it in 48 hours. I believe they would succeed - building on top of Smalltalk environment with Morphic[1] this shouldn't be too hard.

For those who don't know why: go grab Pharo[2] Smalltalk and "Pharo By Example"[3] book and play with it. It's absolutely beautiful, an experience well worth a few days of time.

[1] As already noted, Morphic was first implemented for Self and is not a necessary part of any Smalltalk.

[2] http://www.pharo-project.org/home

[3] http://pharobyexample.org/


You work with Pharo?

I have a strong feeling that it's not a language matter anymore. The people behind Smalltalk and Self are still active. The idioms have spread everywhere, and such ideas will re-emerge.


I hope it is a scream of "FINALLY!!".


First let me say that I really don't want to be mean here: I'll try to do constructive criticism.

I've worked professionally in the video-game industry and I must say that I'm a bit disappointed by all these blogs / videos trying to promote tools and/or languages by using very poorly conceived games.

Take ChromaShift for example: it's terribly bad. We're in 2013, and the thing draws fewer "pixels" and yet is slower than any 2D scroller from the 80s running on a C64.

Sure, it was done in 48 hours and kudos for that... But back in the nineties we'd enter 48 hours or 24 hours "demo" competition and come up with stuff incredibly better looking than that.

The author of ChromaShift made a talk about the "Component Entity System" he's using.

And that's where the "suspension of disbelief" simply doesn't work for me: every time he shows the game, it's freakin' ugly and slow as molasses.

Then comes the disappointment: if you want to run at a reasonable framerate you, of course, need to perform quite some logic in about 16 milliseconds (the author correctly mentions this in his talk).

And... Drumrolls: simply "iterating" over a seq made of a hundred elements in Clojure apparently takes more time than that. Not to mention that if you create objects at every frame then at one point the GC shall have to kick in and your real-time performance will suddenly drop (the author says that in the talk I watched about CES/Chromashift).

But why oh why use a language that is very arguable not meant to write video games in (Clojure) to... write a game in it as an example of what can be done!? (or as an example of how powerful an IDE supposedly is!?)

The "problem" is that they're talking about that "CES" as it was the one way to create games. But how many games written in Clojure have been shipped? Heck, how many games written in Clojure are even playable?

There are, today, lots of people writing games (either professionals or amateurs) and selling them (or giving them away for free).

And not a single one of these game is using that "Component Entity System". And hardly any of them is using Clojure (there may be one or two exceptions but that's about it).

And every single one of these real games (even the simplest smartphone game) looks better and is more responsive than the poor examples we see in Clojure talks.

So how are we, Clojure believers (I'm a big Clojure fan and investing lots of time and energy learning it), supposed to take any of this seriously?

Once again, I don't want to be mean: these guys are typically good programmers but, to be honest, they don't know anything about game programming and that is a bit saddening.

EDIT: I won't even start talking about the "quality" of the "text editor" part of Light Table. I like the concept of LT, but I do want a client/server mode where I can plug in vim or Emacs (or Sublime Text 2 or whatever) as my text editor. Otherwise I'm never going to take LT seriously.


> But why oh why use a language that is very arguable not meant to write video games

Because Chris has absolutely no interest in writing a game (as an end goal). He's trying to write an IDE. He wrote a game to test out some ideas that he thought had direct applicability to his design for an IDE. Turns out that it helped him develop some insights into how to build dynamic systems with immutable data structures and composable, aspect-oriented behavior injection. I think that means it was a successful experiment; he wants to share his learning process.

No one is arguing that Clojure is the right tool for the job when it comes to real-time games, so I have no idea why you are so upset...


I'm a little confused as to your assertion about Clojure and iterating over a 100 element sequence.

On my machine, at least (map identity (range 0 99)) takes 0.046072 milliseconds, which is somewhat less than 16 milliseconds. Now, maybe there's something I'm missing here, can you clarify the source of your assertion?

Again, definitely not trying to be confrontational, and I don't know clojure very well, nor do I know anything about games programming, but that was what I found on my i3 core with nrepl running in emacs 24.

On topic, I think that the architecture proposed sounds really interesting (as does much of Light Table).


To be fair, "(time (map identity (range 0 99)))" -> 0.037 msecs, "(time (doall (map identity (range 0 99))))" -> 0.13 msecs. :)


For people who don't know Clojure, the reason this is a critical difference is because Clojure's map function is "lazy", in that it simply instantaneously returns an object which, if you attempt to iterate on it, only then attempts to execute the code involved.

To make this more concrete, if I take the former expression, and make it go to 1000, or 10000, or 10000000000000, it seriously takes the same amount of time on my system. By passing the result to doall you are forcing the entire seq to be iterated and executed.
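The same trap exists with Python generators, which makes for an easy analogy: building the lazy object returns instantly no matter the size, and the real work only happens when something consumes it (the analogue of wrapping the expression in doall).

```python
import time

# Illustrative Python analogy for Clojure's lazy seqs; timings vary by machine.

def timed(f):
    """Run f() and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = f()
    return result, time.perf_counter() - start

N = 5_000_000

# Creating the generator computes nothing: it returns almost instantly,
# regardless of N (like an unrealized lazy seq).
lazy, t_build = timed(lambda: (x * x for x in range(N)))

# Consuming it forces every element to be computed (like doall).
forced, t_force = timed(lambda: sum(x * x for x in range(N)))

print(f"build: {t_build:.6f}s  force: {t_force:.6f}s")
```

Benchmarking the first line alone would tell you only how fast Python can build a generator, not how fast it can square five million numbers, which is exactly the mistake the unforced `(map identity ...)` timing makes.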


Ah, I had a feeling I was missing something. I probably should have realised laziness had something to do with it. I'll have to keep that in mind in the future. Thanks for the information, it's greatly appreciated.


You're talking about CES like it's a Clojure concept:

> The "problem" is that they're talking about that "CES" as it was the one way to create games. But how many games written in Clojure have been shipped? Heck, how many games written in Clojure are even playable?

CES has been around for a long time and was originally created as an OOP design pattern to get around the inflexibility of the traditional Actor model in OOP. It's not a Lisp design pattern. Don't take Clojure's (alleged) runtime performance shortcomings and tack it on to CES so you can claim it's a shoddy design pattern.

The performance of a CES implementation is reliant on just that: the implementation. And CES systems can be quite performant; I've used CES-derived approaches in the design of high-fidelity simulation environments and they've served me just fine.
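To make the pattern concrete, here is a toy entity-component sketch in Python (illustrative only, not any shipping engine's code): entities are just ids, each component type is plain data keyed by entity, and a system iterates over whichever entities happen to have the components it needs.

```python
# Minimal entity-component-system sketch. Entities are integers; each
# component type is a dict mapping entity id -> component data.

positions = {}   # entity -> [x, y]
velocities = {}  # entity -> [dx, dy]

_next_id = 0
def create_entity():
    global _next_id
    _next_id += 1
    return _next_id

def movement_system(dt):
    # A system acts on every entity that has both components, no matter
    # what "kind" of thing it is: composition instead of inheritance.
    for e in positions.keys() & velocities.keys():
        positions[e][0] += velocities[e][0] * dt
        positions[e][1] += velocities[e][1] * dt

player = create_entity()
positions[player] = [0.0, 0.0]
velocities[player] = [1.0, 0.5]

scenery = create_entity()
positions[scenery] = [5.0, 5.0]  # no velocity component: movement skips it

movement_system(dt=2.0)
print(positions[player])   # [2.0, 1.0]
print(positions[scenery])  # [5.0, 5.0]
```

Note there is no Entity class hierarchy at all; adding a new capability to an object is just inserting a row into another component table, which is why the pattern sidesteps the rigid Actor-style inheritance trees mentioned above.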


To be honest, I think you're completely missing the point. The game, Chromashift, is not a serious attempt to make a quality game. It was an opportunity to try building a (somewhat) complex system with the IDE they've been working on. Along the way, they took some inspiration from CES, and brought it back into LightTable. And yes, CES can be used to create games with relatively good performance. Take a look at Replica Island [1], an open-source 2D side scroller for Android that runs well on the low-powered Android devices that came out 2-3 years ago.

Anyways, the main point: what does the fact that Clojure isn't a great language for building games have to do with this post?

[1]: http://replicaisland.blogspot.com/


+1. I faced this writing software in Python that was meant to be fast and interactive and collected data about just how few operations you can do in 16 ms. A blog post containing timing information for common operations in Python is available at http://blog.enthought.com/general/what-is-your-python-budget... . (Disclaimer: I am the author of the blog post).

Some things surprised me when I first did the timing -

1) Turns out that a function call plus an empty loop over a million items is ~16 ms in Python.

2) Turns out that on a pretty high end machine you get about 174k function calls in 16 ms. On a Core 2 duo laptop you get about 80k function calls in 16 ms. (In both cases the function takes no parameters and has a no-op body)

For getting timing information on your own machine you can run the code from https://github.com/deepankarsharma/cost-of-python
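For a rough feel of the call-overhead number without a full benchmark suite, something like the snippet below works; the exact count depends entirely on the machine and interpreter, so treat it as a sketch rather than a benchmark.

```python
import time

# Rough measurement of how many no-op function calls fit in one 60fps frame.
# Numbers vary widely by machine; this only illustrates the method.

def noop():
    pass

BUDGET = 0.016  # one 60fps frame, in seconds

start = time.perf_counter()
calls = 0
while time.perf_counter() - start < BUDGET:
    for _ in range(1000):  # batch calls to amortize the clock reads
        noop()
    calls += 1000

print(f"~{calls} no-op calls fit in one 16 ms frame")
```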


If you were planning to write a high performance game using python, most likely you wouldn't be using the vanilla cpython interpreter, but instead using cython (http://cython.org/) to write extensions in essentially C for all your high performance code.

I think it's important to remember one of the core tenets of Python is to first write in Python, then optimize the bits where necessary in C by moving those calls into an extension - by using Cython you get to move to C-like speeds by just annotating your existing Python code.

Also - I think a fairly more common approach to using python in game development is to write the core in C++, then call out to python for scriptability purposes (e.g. configuring characters/levels) - rarely would one write a full game in python unless extensions were heavily used, for the reasons quoted above.


I agree with everything you say but continually find cases where things have to be rewritten in Cython / using the CPython C API, or where core logic has to move to NumPy. Knowing that you can't be interactive if your solution requires

a) allocating more than 160k objects

b) creating more than 22k numpy arrays

c) entering a with context more than 3700 times

d) doing a single for loop of over a million elements

can save you time from the get-go. I am not suggesting that you can't write fast interactive code in Python; I am saying you won't write fast code by accident.


> And... Drumrolls: simply "iterating" over a seq made of a hundred elements in Clojure apparently takes more time than that.

To put this into perspective: in Supreme Commander you have thousands of units shooting tens of thousands of bullets/missiles/shells, each of which is being simulated by a physics engine with accurate collision detection that actually takes the shape of all the models into account (so if an airplane has a hole in it (http://www.supremecommander-alliance.com/uploads/RTEmagicC_B...), then a bullet coming at the hole will fly through the hole).

Video example: https://www.youtube.com/watch?list=PLwzh5b6Jwbq8xFe0TCW4BSA9...

ClojureScript is great, but if you follow the idioms then you'd likely have problems testing a single bullet for collision against a single model, let alone tens of thousands against thousands (and building the spatial data structures to support that). And then remember that this is just part of what needs to happen each frame: you also need to update the trajectories and locations of every object, render the world to the screen, synchronize the actions of each player with each other player, compute the sound based on the location of the camera, etc.


Iterating over 100 items doesn't take 16ms. I'm not sure where he got that figure from.

But your example is a bit exaggerated. I guarantee you that an RTS like Supreme Commander doesn't update the physics simulation or game logic at 60 frames per second. It's probably more like 10 FPS with the rendering engine interpolating objects at 60 FPS.


I can corroborate this: in the large video game project I worked for once, we definitely had a different notion of "physics frames" vs. "render frames", and they updated at different rates. If nothing else, the graphics system is not necessarily deterministic in speed, and missed frames should just get dropped, but a physics simulation is often something where if you skip a frame you might miss a collision.
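The usual way to decouple the two rates is a fixed-timestep loop: the simulation always advances in constant increments, so no collision step is ever skipped, while rendering runs as often as it can and interpolates between the last two physics states. A generic sketch (not from any particular engine; times are in integer milliseconds to keep the arithmetic exact):

```python
# Generic fixed-timestep loop. Physics always steps by DT so collisions are
# never skipped; rendering happens once per real frame and would interpolate
# between the previous and current physics states using `alpha`.

DT = 100  # physics tick in ms (10 updates/sec)

def run(frame_times):
    """Simulate a sequence of real frame durations (ms); return tick counts."""
    accumulator = 0
    sim_ticks = 0
    renders = 0
    for frame_dt in frame_times:     # real elapsed time per render frame
        accumulator += frame_dt
        while accumulator >= DT:     # catch up in fixed increments
            sim_ticks += 1           # physics_update(DT) would go here
            accumulator -= DT
        alpha = accumulator / DT     # interpolation factor for the renderer
        renders += 1                 # render(lerp(prev, curr, alpha))
    return sim_ticks, renders

# Four 50 ms render frames (200 ms total) produce two 100 ms physics ticks.
print(run([50, 50, 50, 50]))  # -> (2, 4)
```

The accumulator is what lets the renderer drop or add frames freely while the physics stays deterministic, which is exactly the "physics frames" vs. "render frames" split described above.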


It's true that simulation speed is independent of graphics speed in Supreme Commander. The game lets you adjust the simulation speed, although of course if there are too many things happening your CPU can't keep up and it slows back down. If you have a fast computer or there are not many things happening you can run the simulation at 100 ticks per second if you want (although it will be hard to play because everything goes too fast).

That figure comes from the presentation linked to by this post, but I'm not sure where it came from before that. In any case the team had performance problems for a comparatively trivial amount of processing work. Looking at the game you could easily run that game logic at 10000 updates per second when you implement it in C/C++.


"I don't want to be mean here"

"ChromaShift is terribly bad"

"it's freakin' ugly"

"finger quotes quality of the finger quotes text editor"

You certainly sound like someone who has an open mind and is respectfully putting forth constructive criticism!


"I don't want to be mean."

Is mean.

Classic.


How can one say: And not a single one of these game is using that "Component Entity System".

That seems like an impossible statement without deep knowledge of the inner workings of the popular games you are referring to, which you certainly don't have access to.

Also, CES is a fairly longstanding design pattern for game development (at least 10+ years) - I refer you to this comment: http://news.ycombinator.com/item?id=4907998


It's a bit pointless to talk about the performance of a game built for a 48 hour coding competition, but since you brought it up...

1) What are you basing your judgement of performance on? Not the HD-recorded video that had two of my cores pegged the whole time, I hope - because that's not at all representative. ChromaShift runs at 60fps on every piece of hardware I own (including a MacBook from about 4 years ago) and generally runs at < 2ms per frame. There's a bug for some chipsets that causes Chrome and FF to render canvas very slowly when hardware acceleration is on - if you're actually running the game, try turning it off.

2) Even if it were slow, the comment you're making would be about JavaScript and Canvas performance, not CLJS. The performance-intensive bits of the game all delegate to JS, just like the performance-intensive bits of any Clojure app are typically wrapped Java.

  > But back in the nineties we'd enter 48 hours or 24 hours "demo" competition and come up with stuff incredibly better looking than that.
None of us are game designers or builders. I'm sorry it didn't live up to a professional's expectations for a 48 hour game.

   > And... Drumrolls: simply "iterating" over a seq made of a hundred elements in Clojure apparently takes more time than that. 
It's tens of thousands of elements and seqs aren't meant to be fast. You can easily just use an array when performance is critical.

   > But why oh why use a language that is very arguable not meant to write video games in (Clojure) to... write a game in it as an example of what can be done!? (or as an example of how powerful an IDE supposedly is!?)
The game was never used as an example of how powerful the IDE is. The game is just a game. The concepts behind the game happen to be similar to how LT was designed. Do you have any issues with the actual design of LT? This post has nothing to do with ChromaShift.

   > And not a single one of these game is using that "Component Entity System".
No one must build games on top of Unity and all the examples going back to the early 00's (like Dungeon Siege) must be a figment of my imagination. Entity systems are not new by any stretch of the imagination. [1]

   > So how are we, Clojure believers (I'm a big Clojure fan and investing lots of time and energy learning it), supposed to take any of this seriously?
Clojure(Script) is very fast, but it's not C and won't ever be. I've never run into a performance issue that I simply couldn't drop down to the platform and then wrap nicely to get native JVM or JS performance.

[1]: http://scottbilas.com/games/dungeon-siege

BTW, constructive criticism doesn't typically use words like "terrible" :)


> games built on top of Unity

For the curious, here is Unity's components reference page:

http://docs.unity3d.com/Documentation/Components/index.html


Though, if anyone's actually interested in coding a game in this style in Unity3D, I would heavily recommend against using Unity Components directly if you're at all concerned about performance and code clarity.

Instead, have one MonoBehaviour and a base properties class which you can delegate your events to (usually only a half dozen). That way you're not relying on undefined behavior on the Unity side, and you can, e.g., new and clone properties without issue.

I use something similar for Under the Ocean (http://www.underthegarden.com), which has to deal in worst case with hundreds of objects with potentially tens of behaviors that cannot be known ahead of time.

The system works damn well - for my type of game, perhaps it's the only way to do it. But performance is forever a concern. MonoBehaviours are nice (e.g., you can edit their variables in the editor), but they're heavyweight.


There are shipping AAA games and game engines that use some variation of a component system for entities, including Unreal Engine 3 and CryEngine.

In practice, "component entity system" simply means favor composition over inheritance. There are different ways you can architect your code to fit the definition, some of them efficient and cache friendly.


And to elaborate more: the reason for this shift is that the current-gen consoles (Xbox 360 and PS3) are really, really bad at random access into memory.

Chasing pointers on PC is bad, chasing pointers on them is suicidal. With them, it's better to do a linear copy of ~3kb than it is to reference one more uncached pointer. And this is especially bad because they don't have any kind of decent prefetchers -- if you haven't actually touched an object, it's probably uncached.

Because of this, if you use traditional OO to structure your code, you leave an order of magnitude or more of performance on the table. And since the CPUs are slow as it is, you can't afford to do this.


Just to add to this: the Cocos2d team has been advising developers to use the JavaScript bindings since the latter half of last year. The same codebase can be written and deployed cross-device and to the web with Cocos2d-html5.

With SpiderMonkey and the bindings wrapping a native implementation of the Chipmunk physics lib, etc., the performance is great.

I've had proof of concept working with ClojureScript on Cocos2d-x for a couple weeks now.

Good code always comes with the requirement of figuring out which bits to optimize. OP seems to be confused if he thinks none of the bits in a game can be delegated to higher layers.


Component-Entity System designs are not at all related to Clojure or functional programming languages. They got their origin in MMO engine design, and have since become popular with game engine theorists.

You could implement a component-entity system in C or C++ just as easily.


I see these games as proofs of concept.

Is there any inherent reason why you couldn't write a performant game in ClojureScript, if you took the time and energy to focus on performance and polish as much as with other games?

I don't think the point of these articles is "if you want to write a game better do it in Clojure with CES" but "look at this interesting programming concept"


> Is there any inherent reason why you couldn't write a performant game in ClojureScript

From the parent:

if you want to run at a reasonable framerate you need to perform quite some logic in about 16 milliseconds[...] simply "iterating" over a seq made of a hundred elements in Clojure apparently takes more time than that.


But that was the point - you can write it cleanly in ClojureScript, experiment, find bottlenecks, then drop down to Javascript for the tight parts, and clean it up (beautifully) with macros.

There's obviously a tradeoff here with immutability/correctness/speed, but when you need to iterate over thousands (or hundreds of thousands) of items, you can get 'native' speed when you need it.


FWIW, this is exactly what we did and chromashift has native JS level performance.


I also worked for a while in the entertainment industry (AI behaviors, some graphics, some control of motion platforms, sound, etc.), our clients were mostly Nintendo and Disney.

On my first day on the job I was assigned a maxed-out SGI Reality Engine, and I installed Scheme on it - figuring that I would want a scripting language for fast changes, etc. I quickly realized that was bullshit, and used C++ for everything for performance.

Today, I do about 2/3 of my work in Clojure but if I were ever to work on games and virtual reality projects again I would go back to using C++.


Ignoring the fact that this is a huge post with absolutely nothing of value in it...

> And not a single one of these game is using that "Component Entity System".

That's plainly wrong. The idea that this person could know the internals of every game being made right now is silly enough. But, having myself worked on AAA games in the recent past that used component-based architectures, I can say with authority that this guy is Full Of Shit.



