15 years ago? The ES4 stuff happened in 2009-2010; that was 5 years ago.

As for whether the failure of ES4 was Microsoft's fault: no, it wasn't. It was ES4's fault. Microsoft and Douglas Crockford merely pointed out the irreconcilable design mistakes in ES4. The last thing JavaScript needed was more bungled mistakes and weird things in it. In particular, the issues with ES4 were around the packaging system and how it combined with namespaces. Upon close examination, the design, lifted straight out of ActionScript, just wasn't going to work on an ASYNCHRONOUS web.

It works fine in a binary blob of Flash downloaded in one go. But when you have to download individual files separately, all hell breaks loose.

-----


ES4 wasn't all or nothing. What do the type system or the class system have to do with asynchronicity? Nothing. And by the way, one can totally LOAD classes and packages asynchronously in AS3, so that's not a valid argument. Crockford didn't like ES4 because he's Crockford, just as he doesn't like a lot of ES6 features, like the NEW operator. As for Microsoft, they were just not interested in working on IE anymore; they didn't give a damn and just wanted to push Silverlight everywhere.

That's the story here. So your account of the events is a bit misleading.

-----


More ambitious idea: Make the controller a grip on the end of a robot arm.

-----


This seems to be covered by JSON Schema already, does it not? http://tools.ietf.org/html/draft-zyp-json-schema-03 See section 6, Hyper Schema, which lays out a way of describing which parts of a JSON object contain href links. This seems more elegant to me.
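
For illustration, a rough sketch of what that looks like under draft-03; the property names and paths here are my own invention, not from the spec:

    {
        "type": "object",
        "properties": {
            "id": {"type": "integer"},
            "author": {"type": "string"}
        },
        "links": [
            {"rel": "self", "href": "posts/{id}"},
            {"rel": "author", "href": "users/{author}"}
        ]
    }

The {id} and {author} in the href values are URI templates filled in from the instance's own properties, so the schema, not the payload, describes where the links live.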

-----


There's a difference between a right you know you are giving up in exchange for a service and a right you are giving up without realizing it. The way ad companies track users is so obscure that most people cannot be expected to be giving informed consent.

-----


If it's getting to your client via HTTP, there will be a way to get at it, legit or not.

-----


Why isn't HBO's business failing a palatable option? Businesses fail all the time; why can't HBO?

-----


I'm not sure if this answers your question, but HBO produces incredibly high-quality content that a lot of people fear would go away.

-----


Replicating native file system functionality is not necessarily desirable from a UX perspective.

-----


It's not something that everything should be able to do, but it needs to be possible.

It doesn't actually matter in the long run what the specific capabilities are. The important thing is that for web apps to be genuine apps that matter, they need to be able to do anything that would be allowed from native code.

I don't actually mind them making a crippled platform, as long as they admit that it is a crippled platform that can't do much of what I want.

Selectively disallowing certain behaviours is really something that should happen at a different level, one that affects native and HTML/JS apps equally.

-----


I think that's the theory behind Chrome OS.

-----


Except Chrome OS is developed by people who are motivated to hold your data for you. I don't have a lot of faith in Chrome OS's ability to be independent of a server model when it is so much in Google's interest for you to be effectively tethered to their servers.

-----


Have you seen this? http://www.w3.org/TR/FileAPI/
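
For anyone who hasn't, a minimal sketch of what it allows, assuming an <input type="file" id="picker"> element on the page (the id is my own placeholder):

    // Read a user-selected file as text with the W3C File API.
    document.getElementById('picker').addEventListener('change', function (e) {
        var file = e.target.files[0];      // a File object
        var reader = new FileReader();
        reader.onload = function (ev) {
            console.log(ev.target.result); // the file's contents as text
        };
        reader.readAsText(file);
    });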

-----


yes

-----


Why? What are we gaining by cramming everything into the browser? We already have OSes, networking stacks, and technologies for remoting out the UI of server-based apps, as well as techniques for delivering code to where the data lives. Why not just use a tool that was designed for the task in question, instead of building some unholy, golem-like chimera of parts and bits and pieces cribbed from here and there?

Seriously, a modern web-browser/web-app combo seems like something better suited to a Lovecraft story (I'm thinking "Herbert West: Reanimator" in particular) than real life. :-(

-----


So I assume you use a desktop email client, and not gmail, or any other web mail app, then?

-----


Actually, yes, at least sometimes. Web-based email clients are useful on occasion, but that has nothing to do with the point I'm trying to make, which is that we could do something better than either "traditional" desktop apps or golem-like chimera apps crammed into a web browser.

Why not use the browser for navigating hypermedia and then let it hand off to a different app for things that require richer interaction? It doesn't have to be a pre-installed traditional desktop app... there's the aforementioned JNLP, and who-knows-what as-yet-uninvented approach. I'd love to see more people spending time on that "as yet uninvented" thing than trying to turn a browser into a crappy X server and goofing around with nasty, brutal hacks like AJAX.

-----


I think you are overselling the X server; I don't know if you've ever tried to use X over a network, but it's pretty awful. The web is not a crappy X server. It's a much, much superior approach, in that it allows code to run in the GUI without the latency of sending every single mouse click and key press over the network, and every single low-level drawing command and uncompressed pixel back.

I think what you want has already been tried, with Java applets. Java applets had 10 years to establish themselves as the one true way to make professional web apps and replace the OS. It was a monumental failure of epic proportions, and JavaScript+HTML won that battle a thousand times over. Now you want to try it again because... why? Because you think it's a technically better approach? It's just that history disagrees with you.

The web can work over a modem and crummy mobile phone connections. X would be hopelessly unusable under such conditions, so it's a complete mystery to me why you are claiming that the web is a "poor man's" X server. You'd have to be half mad to think that. The web could be 10 times shittier than it is and would still beat X for quality and responsiveness. So really... HUH!? WHAT?

-----


An arithmetic coding scheme with a model based on the probabilities found in JSON abstract syntax trees would significantly improve on most typically used generic compression schemes. Arithmetic coding has largely been avoided thus far due to patents, which have only recently expired, if I remember correctly.

Using the order-2 precise model on this page I get 190 bytes, and that is still a generic, non-JSON model: http://nerget.com/compression/
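
For intuition, here's a minimal sketch (my own illustration, not from the linked page) of the modelling half: count symbol frequencies and estimate the order-0 entropy of a JSON string, which is the size an arithmetic coder driven by that model approaches.

    // Order-0 entropy estimate in bits: the output size an arithmetic
    // coder with this static per-character model would approach.
    function entropyBits(str) {
        var counts = {};
        for (var i = 0; i < str.length; i++) {
            counts[str[i]] = (counts[str[i]] || 0) + 1;
        }
        var bits = 0;
        for (var c in counts) {
            var p = counts[c] / str.length;
            bits -= counts[c] * Math.log(p) / Math.LN2;
        }
        return bits;
    }

entropyBits(JSON.stringify(data)) / 8 gives a rough byte estimate; a model trained on JSON token statistics would assign sharper probabilities, since it also knows which tokens can follow which, and do better still.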

-----


This. JSON-specific compression schemes aren't going to yield gains over AST-friendly schemes unless the JSON serialization specification changes significantly.

Along these lines, shipping a schema with the data payload is Avro-like... which is also questionable in terms of efficiency when compared with gzip/LZO.

-----


Hey look, I found this: http://research.microsoft.com/en-us/projects/jszap/

-----


They are using gzip compression level 1. Bogus.

-----


Are you referring to the graph, in which they set the gzip compression as "1" in order to clearly show the ratio of compression improvement that their technique has over gzip?

-----


I see a pretty major problem with this. It seems to depend on the order of key-value pairs in object literals being well-defined, but that order is not defined by the JSON or ECMAScript standards. So you can't really depend on the order of keys in a JSON object unless you explicitly define some order (alphabetical, for instance). I like the basic concept of compressing JSON to JSON, but this is not a particularly good way to do it, since the order of those keys may not be preserved in round trips through JSON encoders and decoders in various languages.
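
A concrete case, if it helps: in V8 (and most engines), integer-like keys are enumerated in ascending numeric order regardless of the order they arrived in, so even a straight round trip can reorder them.

    // The keys swap places without any third party touching the data.
    var input = '{"2": "b", "1": "a"}';
    var output = JSON.stringify(JSON.parse(input));
    console.log(output); // '{"1":"a","2":"b"}'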

-----


I saw the same thing right away. It seems like the first item in the array would have to be used as a hash of key/index pairs for the following items. I guess this would only be useful with large sets of data; the examples make them look kind of silly =)

So this:

    "users": [
        {"first": "Homer", "last": "Simpson"},
        {"first": "Hank", "last": "Hill"},
        {"first": "Peter", "last": "Griffin"}
    ],
Becomes:

    "users": [
        ["first", "last"],
        ["Homer", "Simpson"],
        ["Hank", "Hill"],
        ["Peter", "Griffin"]
    ],
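
A rough sketch of that encoding, assuming every object shares the keys of the first item (a real implementation would have to verify that):

    function packRows(items) {
        var keys = Object.keys(items[0]);
        var out = [keys];                  // first row: the "header"
        items.forEach(function (item) {
            out.push(keys.map(function (k) { return item[k]; }));
        });
        return out;
    }

    // packRows([{first: "Homer", last: "Simpson"}, {first: "Hank", last: "Hill"}])
    // => [["first", "last"], ["Homer", "Simpson"], ["Hank", "Hill"]]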

-----


If you can infer a schema, I'd almost prefer a column-oriented arrangement:

   "users": 
        {
         "first":["Homer","Hank","Peter"], 
         "last":["Simpson","Hill","Griffin"]
         },
Then the information about the original structure can be restored from a set of object paths that need to be "rotated" from column to row orientation, in this case just ["users"], saving a few characters in the process.

Though I see that the advantage of this system is supposedly that it can handle any shape of data, not just data with a fixed schema. I've been trying to figure out how trang [http://www.thaiopensource.com/relaxng/trang.html] does its schema-inference trick (it turns a set of XML files into a RelaxNG schema). If you have a schema for a JSON file, it's knowledge you can apply to algorithmically creating really efficient transformations.
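
A minimal sketch of that "rotation" step, assuming you've already walked the stored paths down to the column object:

    function rotateToRows(columns) {
        var keys = Object.keys(columns);
        var rows = [];
        for (var i = 0; i < columns[keys[0]].length; i++) {
            var row = {};
            keys.forEach(function (k) { row[k] = columns[k][i]; });
            rows.push(row);
        }
        return rows;
    }

    // rotateToRows({first: ["Homer", "Hank"], last: ["Simpson", "Hill"]})
    // => [{first: "Homer", last: "Simpson"}, {first: "Hank", last: "Hill"}]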

-----


Exactly, ZenPsycho; the purpose of RJSON is to convert data with a dynamic schema. Fields with default values are often omitted: for example, if most of the data has the 'private' property set to false, it makes sense to output it only for the 1% of objects with 'private' set to true. This issue is addressed in RJSON.

-----


I prefer this approach. It's more readable and less fragile.

-----


Doesn't this introduce ambiguity? How do you represent a list of 'tuple lists'?

-----


You can see it in the unit tests: https://github.com/dogada/RJSON/blob/master/test/tests.js#L4...

-----


You could steal the method RJSON uses and do this:

   "users": [
        [3, "first", "last"],
        [3, "Homer", "Simpson"],
        [3, "Hank", "Hill"],
        [3, "Peter", "Griffin"]
    ],

-----

[deleted]

That is only enough to make it seem like it works okay. But as I said, there's nothing in the JSON spec that obligates any intervening party to preserve the order of the keys in an object.

-----


It doesn't matter. Even if the order of keys is changed somewhere in the middle during JSON.stringify, the schema id always uses the keys sorted alphabetically, and object values are always stored in the same order as the sorted keys: https://github.com/dogada/RJSON/blob/master/rjson.js#L213
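
In a hedged sketch (not RJSON's actual code; see the link above for that), the idea is that the schema lookup key is derived from the sorted property names, so incoming key order is irrelevant:

    // Two objects with the same keys map to the same schema key,
    // whatever order their keys arrive in.
    function schemaKeyOf(obj) {
        return Object.keys(obj).sort().join(',');
    }

    // schemaKeyOf({last: "Hill", first: "Hank"})     => "first,last"
    // schemaKeyOf({first: "Homer", last: "Simpson"}) => "first,last"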

-----


Thanks ZenPsycho, I added sorting of object schema keys to fix this issue: https://github.com/dogada/RJSON/commit/a27c8927cd0c2d7d151e2...

-----
