You send it whatever data you like (timestamps, integers, floats, arbitrary strings, and so on), and it lets you build an intelligent dashboard displaying the data however you like.
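For concreteness, here is a minimal sketch of what "sending it whatever data you like" looks like. It assumes Cube's documented HTTP collector (events POSTed as JSON to `/1.0/event/put` on port 1080); the event shape is a type, a timestamp, and an arbitrary data payload:

```python
import json
import urllib.request
from datetime import datetime, timezone

def cube_event(event_type, data, time=None):
    """Build one Cube event: a type, an ISO-8601 timestamp, and arbitrary data."""
    time = time or datetime.now(timezone.utc)
    return {"type": event_type, "time": time.isoformat(), "data": data}

def put_events(events, collector="http://localhost:1080/1.0/event/put"):
    """POST a batch of events to Cube's HTTP collector (endpoint per Cube's docs)."""
    body = json.dumps(events).encode("utf-8")
    req = urllib.request.Request(
        collector, data=body, headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

# Example payload: one "request" event mixing strings, floats, and booleans.
events = [cube_event("request",
                     {"path": "/checkout", "duration_ms": 42.5, "ok": True})]
```

The dashboard side then queries the evaluator for aggregates over these events, so the emitter never needs to know how the data will be charted.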
I'd love to see some performance benchmarks showing how many datapoints you can pump in and out of Cube. We've been collecting about 1 million metrics every minute on a speedy dual-processor, quad-core box backed with SSDs for storage.
The only downside to something like this (and Graphite, etc.) is that you can only visualize data that you store in Cube (if I understand correctly). That's great for things you sample yourself, but what if you want to compare something you sampled yourself with something from New Relic or another data source? Cube looks to go a step further than Graphite in allowing easier custom visualizations, but the data model is still one-size-fits-all. You can write an emitter/parser that grabs the external data and stores it in Cube, but that means a huge amount of duplication (and it essentially becomes polling).
I think a better architecture is to separate data storage/acquisition from visualization. Establish a set of standards for data interfaces that specify the output they provide, and build your visualization on top of that. Then your interfaces can be API wrappers, MySQL connections, a MongoDB-based storage system, a Redis-backed storage system, etc.
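A rough sketch of that separation, under the assumption of a made-up `DataSource` interface (the names and `fetch` signature are illustrative, not from any real project): each backend, whether an API wrapper, a MySQL connection, or a Redis store, implements the same query method, and the visualization layer talks only to the interface:

```python
from abc import ABC, abstractmethod
from typing import Iterable, Tuple

class DataSource(ABC):
    """Hypothetical standard interface: (metric, start, stop, step) -> (time, value) pairs."""
    @abstractmethod
    def fetch(self, metric: str, start: float, stop: float,
              step: float) -> Iterable[Tuple[float, float]]:
        ...

class InMemorySource(DataSource):
    """Stand-in backend; a Redis- or MySQL-backed source would implement the same method."""
    def __init__(self, series):
        self.series = series  # {metric: [(time, value), ...]}

    def fetch(self, metric, start, stop, step):
        return [(t, v) for t, v in self.series.get(metric, [])
                if start <= t < stop]

def render(source: DataSource, metric, start, stop):
    """The 'visualization' layer: depends only on the interface, not the backend."""
    points = list(source.fetch(metric, start, stop, step=1))
    return f"{metric}: {len(points)} points"
```

The payoff is that comparing your own samples against a New Relic series becomes a matter of writing one more `DataSource`, not copying the data into a single store.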
As an aside, it's always great to see more usage of PEG.js.
Running it, I'm having some issues loading pages that I think have something to do with caching static assets. Why not use an existing static-file library?
Other than that, love it!
Yes, Protovis is no longer maintained, but here is a good post defending it:
And it would be nice if they used Socket.IO for communication (they might already).
However, it is lovely; I am going to try it out :)