
FB prepends a "for(;;);" which is 1 char shorter than "while(1);", has been the case since 2012/13.

Firebug v2 and Chrome DevTools know how to parse such JSON and ignore that prefix. (IE11 and Firefox's newer DevTools can't handle it and just show a plain text string.)
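
For concreteness, a minimal sketch (the payload is made up) of what such a response looks like on the wire and how first-party code unwraps it; a third-party page can only reach the data via <script src=...>, where the for(;;); spins forever before anything useful runs:

    // Illustrative wire format -- the payload here is hypothetical.
    var wireBody = 'for(;;);{"friends":["alice","bob"]}';

    // First-party code strips the known prefix before parsing.
    function unwrap(body) {
      return JSON.parse(body.replace(/^for\(;;\);/, ''));
    }

    console.log(unwrap(wireBody)); // { friends: ["alice", "bob"] }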




'for(;;);' probably compresses better than 'while(1);' too. Semicolons are (very :) common in JavaScript code, and the for idiom repeats them three times.


Why does it have to be a loop? Couldn't you make a reliable syntax error in fewer than 8 characters?


The risk there is that some parsers might carry on past the syntax error and try to continue parsing. This is JavaScript, after all.


No, that’s not a real risk.


I’m not sure why this is downvoted. No JavaScript engine does that. “This is JavaScript after all” is ridiculous FUD.


I was sure I had used browsers which did that; if I didn't, then sorry, I must be hallucinating.

I wouldn't call it FUD; I'm not suggesting anyone stop using JavaScript, and we are already talking in this article about one crazy workaround because of the weirdness of modern jazz development!

The "this is JavaScript after all" referred to JavaScript tending to continue after errors (which it does in some cases, like a bad callback, or a whole file which didn't parse).


> one crazy workaround because of the weirdness of modern jazz development

Autocorrect of “JS”? If so: it’s not a modern weirdness; this is an old, long-fixed browser bug.

FWIW, JavaScript continues executing after errors in cases where it makes sense. If an event listener throws an error, it doesn’t make much sense for it to stop all future events without crashing the page (which is kind of what IE used to do with its script error message box, and we know how that turned out).


The browser may disclose part of the JSON content in a "parse error" message, and a window.onerror handler on the attacking page could catch that message.

I believe some browsers used to do that some time ago.
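
For illustration, a sketch of that historical attack (the victim URL is made up; modern browsers report only "Script error." for cross-origin scripts, so the message no longer leaks content):

    // The hostile page loads the JSON endpoint as a script, riding on the
    // victim's cookies, and listens for the resulting error.
    window.onerror = function (message, source, lineno) {
      // In some old browsers the parse-error message quoted a fragment of
      // the response body; today cross-origin errors are masked.
      console.log('captured:', message, source + ':' + lineno);
      return true; // suppress the default error reporting
    };

    var s = document.createElement('script');
    s.src = 'https://victim.example/api/private-data';
    document.head.appendChild(s);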


The offending website could have error handling to catch and discard the syntax error (e.g. a global uncaught-exception handler). They won't be able to read the JSON, but otherwise they'd be OK.

But getting hit with the infinite loop instead, they would be actively hurt, and I don't think that interpreter session would be able to recover.
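
For what it's worth, a minimal sketch of that "catch and discard" handler (illustrative only):

    // A global handler on the hostile page swallows the uncaught SyntaxError
    // from a script-src'd response and the page carries on (still without
    // reading the JSON). A for(;;); prefix defeats even this, since the loop
    // blocks the main thread indefinitely instead of throwing.
    window.addEventListener('error', function (event) {
      event.preventDefault(); // discard the error, keep the page alive
    });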


You can, and Google does. For example:

https://fonts.google.com/metadata/fonts

And here's the code in Angular's $http service for stripping out that string:

https://github.com/angular/angular.js/blob/master/src/ng/htt...
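
The idea boils down to stripping a known prefix before parsing; an illustrative sketch (not Angular's exact source) for the ")]}'"-style prefix Google uses:

    // Strip the anti-hijacking prefix, then parse as usual.
    var JSON_PROTECTION_PREFIX = /^\)\]\}',?\n/;

    function parseProtectedJson(body) {
      return JSON.parse(body.replace(JSON_PROTECTION_PREFIX, ''));
    }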


8 chars is already pretty short. If you're concerned about the length, don't be: the response goes out in TCP segments that typically carry around 512 bytes of payload or more anyway.


This is definitely the sort of thing you'd want to gather data on. It's plausible that it could save Terabytes of bandwidth per day at Facebook/Google scale.


That's like saying a car wash can save 3 drops of water at the end of the day.


An extra character will cause 1/512 of responses to take an extra packet, so the amortized cost is still one character per response. Presumably this matters at scale.
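
Spelled out as a toy model (numbers are illustrative; it assumes response lengths spread uniformly modulo the payload size and counts each extra packet as a full payload's worth of bytes, as above):

    // With a 512-byte packet payload, one extra byte pushes roughly 1/512 of
    // responses over a packet boundary, adding ~512 bytes each time -- about
    // one byte per response on average.
    var PAYLOAD = 512;

    function avgExtraBytes(lengths, extra) {
      var total = 0;
      for (var i = 0; i < lengths.length; i++) {
        var before = Math.ceil(lengths[i] / PAYLOAD);
        var after = Math.ceil((lengths[i] + extra) / PAYLOAD);
        total += (after - before) * PAYLOAD;
      }
      return total / lengths.length;
    }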


Not if your average response is less than 512 bytes.

If your responses are all between 505 and 512 bytes in length, then it might matter, but most likely you're prematurely optimizing.



