15 years ago? The ES4 stuff happened in 2009-2010; that was 5 years ago.
Works fine in a binary blob of Flash downloaded all in one go, but when you have to download individual files separately, all hell breaks loose.
ES4 wasn't all or nothing. What does the type system or the class system have to do with asynchronicity? Nothing. And by the way, one can totally LOAD classes and packages asynchronously in AS3, so that's not a valid argument. Crockford didn't like ES4 because he's Crockford, just as he doesn't like a lot of ES6 features, like the NEW operator. As for Microsoft, they just weren't interested in working on IE anymore; they didn't give a damn and only wanted to push Silverlight everywhere.
That's the story here. So your account of the events is a bit misleading.
This seems to be covered by JSON Schema already, does it not?
See section 6, Hyper-Schema, which lays out a way of describing which parts of a JSON object contain href links. This seems more elegant to me.
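For reference, a minimal draft-04 Hyper-Schema along those lines might look like this (the property and link values are made up for illustration; "href" is a URI template filled in from instance data):

    {
      "$schema": "http://json-schema.org/draft-04/hyper-schema#",
      "type": "object",
      "properties": {
        "id": { "type": "integer" }
      },
      "links": [
        { "rel": "self", "href": "/orders/{id}" }
      ]
    }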
There's a difference between a right you knowingly give up in exchange for a service and a right you give up without being aware of it. The way ad companies track users is so obscure that most people cannot be expected to be giving meaningful consent.
It's not something that everything should be able to do, but it needs to be possible.
It doesn't actually matter in the long run what the specific capabilities are. The important thing is that for web apps to be genuine apps that matter, they need to be able to do anything that would be allowed from native code.
I don't actually mind them making a crippled platform, as long as they admit that it is a crippled platform that can't do much of what I want.
Selectively disallowing certain behaviours is really something that should happen at a different level, one that affects native and HTML/JS apps equally.
Except Chrome OS is developed by people who are motivated to hold your data for you. I don't have a lot of faith in Chrome OS's ability to be independent of a server model when it is so much in Google's interest for you to be effectively tethered to their servers.
Why? What are we gaining by cramming everything into the browser? We already have OSes, networking stacks, and technologies for remoting out the UI of server-based apps, as well as techniques for delivering code to where the data lives. Why not just use a tool that was designed for the task in question, instead of building some unholy, golem-like chimera of parts and bits and pieces cribbed from here and there...
Seriously, a modern web-browser/web-app combo seems like something better suited to a Lovecraft story (I'm thinking "Herbert West: Reanimator" in particular) than real life. :-(
Actually, yes, at least sometimes. Web-based email clients are useful on occasion, but that has nothing to do with the point I'm trying to make, which is that we could do something better than either "traditional" desktop apps or golem-like chimera apps crammed into a web browser.
Why not use the browser for navigating hypermedia, and then let the browser hand off to a different app for things that require richer interaction? It doesn't have to be a pre-installed traditional desktop app... there's the aforementioned JNLP, and who-knows-what as-yet-uninvented approach. I'd love to see more people spending time on that "as yet uninvented" thing than trying to turn a browser into a crappy X server and goofing around with nasty, brutal hacks like AJAX.
I think you are overselling the X server; I don't know if you've ever tried to use X over a network, but it's pretty awful. The web is not a crappy X server. It's a much, much superior approach, in that it allows code to run in the GUI without the latency of sending every single mouse click and key press over the network, and getting every single low-level drawing command and uncompressed pixel back.
The web can work over a modem and crummy mobile phone connections. X would be hopelessly unusable under such conditions, so it's a complete mystery to me why you are claiming that the web is a "poor man's" X server. You'd have to be half mad to think that. The web could be 10 times more shitty than it is and would still beat X for quality and responsiveness. So really... HUH!? WHAT?
An arithmetic coding scheme whose model is based on the probabilities found in JSON abstract syntax trees would significantly improve on most commonly used generic compression schemes. Arithmetic coding has largely been avoided thus far due to patents, which have only recently expired, if I remember correctly.
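I haven't built this, but the modelling half might look roughly like the sketch below: tokenise the JSON, keep adaptive counts of which token kind follows which, and hand those conditional probabilities to an arithmetic coder (the coder itself is elided; everything here is my own invention, not an existing library):

    // Sketch: adaptive context model over JSON token kinds.
    // A real arithmetic coder would consume these probabilities
    // to narrow its interval on each symbol.
    var TOKENS = ['{', '}', '[', ']', ':', ',', 'string', 'number', 'literal'];

    function kindOf(tok) {
      if ('{}[]:,'.indexOf(tok) !== -1) return tok;
      if (tok[0] === '"') return 'string';
      if (/^-?\d/.test(tok)) return 'number';
      return 'literal'; // true / false / null
    }

    function buildModel(tokens) {
      var counts = {}; // counts[prev][next] = frequency
      var prev = 'start';
      tokens.forEach(function (tok) {
        var kind = kindOf(tok);
        counts[prev] = counts[prev] || {};
        counts[prev][kind] = (counts[prev][kind] || 0) + 1;
        prev = kind;
      });
      return counts;
    }

    // p(next | prev): after '{' a string key is near-certain, after ':'
    // any value kind is possible, etc. That skew is where the win comes from.
    function probability(counts, prev, next) {
      var row = counts[prev] || {};
      var total = TOKENS.reduce(function (s, k) { return s + (row[k] || 0); }, 0);
      return total ? (row[next] || 0) / total : 1 / TOKENS.length;
    }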
I see a pretty major problem with this: it seems to depend on the order of key/value pairs in object literals being defined, and that order is not defined by the JSON or ECMAScript standards. So you can't really depend on the order of keys in a JSON object unless you explicitly impose some order (alphabetical, for instance). I like the basic concept of compressing JSON to JSON, but this is not a particularly good way to do it, since the order of those keys may not be preserved in round trips through JSON encoders and decoders in various languages.
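A sketch of the kind of explicit ordering I mean (my own code, not part of the proposal): recursively rebuild every object with alphabetically sorted keys before encoding, so both ends agree on an order regardless of what any intermediate encoder did. Note that engines generally preserve insertion order for string keys even though the spec historically didn't promise it:

    // Canonicalize key order so it survives round trips through
    // other languages' JSON encoders/decoders.
    function canonicalize(value) {
      if (Array.isArray(value)) return value.map(canonicalize);
      if (value !== null && typeof value === 'object') {
        var out = {};
        Object.keys(value).sort().forEach(function (k) {
          out[k] = canonicalize(value[k]);
        });
        return out;
      }
      return value;
    }

    JSON.stringify(canonicalize({b: 2, a: {d: 4, c: 3}}));
    // '{"a":{"c":3,"d":4},"b":2}'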
I saw the same thing right away. Seems like the first item in the array would have to be used as a hash of key/index pairs for the following items. I guess this would only be useful with large sets of data; the examples make them look kind of silly =)
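Something like this, say, where the first item is an explicit key→index map and the rest are bare value rows (field names invented for the example); decoding walks the map once and zips each row back into an object:

    [
      {"name": 0, "age": 1},
      ["Alice", 30],
      ["Bob",   25]
    ]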
Then the information about the original structure can be restored from a set of object paths, which need to be "rotated" from column to row orientation, saving a few characters in the process. Though I see that the advantage of this system is supposedly that it can handle data of any shape, not just data with a fixed schema. I've been trying to figure out how trang [http://www.thaiopensource.com/relaxng/trang.html] does its schema-inference trick (it turns a set of XML files into a RELAX NG schema). If you have a schema for a JSON file, that's knowledge you can apply to algorithmically creating really efficient transformations.
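If I follow, the "rotation" is just a transpose between row- and column-oriented forms. A naive sketch (my own, and it assumes every row has the same keys, i.e. exactly the fixed-schema case):

    // rows -> columns ("rotate" to column orientation)
    function toColumns(rows) {
      var cols = {};
      rows.forEach(function (row, i) {
        Object.keys(row).forEach(function (k) {
          cols[k] = cols[k] || [];
          cols[k][i] = row[k];
        });
      });
      return cols;
    }

    // columns -> rows (restore the original structure)
    function toRows(cols) {
      var keys = Object.keys(cols);
      var n = keys.length ? cols[keys[0]].length : 0;
      var rows = [];
      for (var i = 0; i < n; i++) {
        var row = {};
        keys.forEach(function (k) { row[k] = cols[k][i]; });
        rows.push(row);
      }
      return rows;
    }

    toColumns([{x: 1, y: 2}, {x: 3, y: 4}]); // {"x": [1, 3], "y": [2, 4]}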
Exactly, ZenPsycho. The purpose of RJSON is to convert data with a dynamic schema. Fields with default values are often omitted; for example, if most of the data has the 'private' property set to False, it makes sense to output it only for the 1% of objects with 'private' set to True. This issue is addressed in RJSON.
It doesn't matter. Even if the order of keys gets changed somewhere along the way during JSON.stringify, the schema id always uses the keys sorted alphabetically, and object values are always stored in the same order as the sorted keys:
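Roughly this (my own sketch of the idea, not RJSON's actual code):

    // Schema id from alphabetically sorted keys; values packed
    // in that same sorted order, whatever order the object had.
    function schemaId(obj) {
      return Object.keys(obj).sort().join(',');
    }
    function packedValues(obj) {
      return Object.keys(obj).sort().map(function (k) { return obj[k]; });
    }

    schemaId({b: 2, a: 1});     // "a,b" -- same id either way
    packedValues({b: 2, a: 1}); // [1, 2]
    packedValues({a: 1, b: 2}); // [1, 2] -- same value order too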