This looks interesting, but I wish there was an explanation of how it works; I'm particularly curious about how it extracts repeated substrings. I tried reading the code, but it starts off with:
    let X, B, O, m, i, c, e, N, M, o, t, j, x, R;
and doesn't get any more readable from there. (And that's the unminified version!)
There aren't any tests, either, so if you're using this and find a bug I guess you just have to hope that nothing breaks when you change a line like
If you go to the live demo, it runs a round-trip test on the string you pass in: crush then encodeURI, then uncrush and decodeURI, and it verifies that the result matches the original. Maybe I should write some automated tests.
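Something along those lines could be turned into an automated test (a rough sketch, assuming the library exposes crush()/uncrush() functions):

    // Round-trip check mirroring what the live demo does.
    // `crush` and `uncrush` are assumed to be the library's compress/decompress functions.
    function roundTrip(input) {
      const encoded = encodeURI(crush(input));
      const output = uncrush(decodeURI(encoded));
      if (output !== input) throw new Error('round-trip mismatch for: ' + input);
      return encoded.length / input.length;   // ratio, handy for reporting
    }

    // e.g. run it over a few representative inputs:
    ['{"a":1}', JSON.stringify({ list: Array(50).fill('repeated') })].forEach(roundTrip);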
I assume they minified it, but I don't have access to the unminified source. I'd like to clean it up or find a cleaner version, but for now it works well enough.
The article you linked above contains a partially unminified version.
EDIT: On second thought, you'd probably be better off using a completely different compression algorithm that doesn't sacrifice performance for golfability.
The compression works by identifying repeating subsequences in the input data and picking the one that gives the best savings if replaced by a single byte not yet part of the input. This step requires time quadratic in the length of the input. After replacing the subsequence and tacking it onto the end so it can be recovered, the process continues until no repetition can be found or all possible bytes have been used.
It can beat LZ's compression ratio for two reasons:
1. The input consists of only bytes that are valid in a URI, so it doesn't need an additional encoding step.
2. It gets very slow very quickly for larger inputs, because it can afford a far more exhaustive search than LZ-style compressors do.
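A rough sketch of that loop (illustrative only, not the library's actual code; the substitution-table format and delimiter here are made up):

    // Repeatedly replace the most profitable repeated substring with a spare character.
    function crushSketch(input, spareChars /* URI-safe chars, hopefully unused in the input */) {
      let out = input;
      const table = [];
      for (const ch of spareChars) {
        if (out.includes(ch)) continue;                    // only use bytes not yet in the string
        let best = null, bestSavings = 0;
        // Quadratic-ish search over candidate substrings (a real implementation is smarter).
        for (let len = 2; len <= out.length >> 1; len++) {
          for (let start = 0; start + len <= out.length; start++) {
            const sub = out.slice(start, start + len);
            const count = out.split(sub).length - 1;       // non-overlapping occurrences
            const savings = count * (len - 1) - (len + 1); // minus the cost of storing `sub`
            if (savings > bestSavings) { bestSavings = savings; best = sub; }
          }
        }
        if (!best) break;                                  // no repetition left worth replacing
        out = out.split(best).join(ch);                    // substitute the spare character
        table.push(ch + best);                             // remember it so it can be undone later
      }
      return out + '\u0001' + table.join('\u0001');        // delimiter chosen arbitrarily here
    }

Decompression just replays the table in reverse order, expanding each spare character back into its substring.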
Thank you. I had been running into a JSON parse error in a script for a couple of days. I love JSON, but this compact notation using Rison helped me quickly debug the error! :-)
Looks cool, I just looked into it. It does work well, but it doesn't appear to do anything to reduce repeated substrings. Maybe a combination of the two would work well, though.
I think a more 'popular' pattern is to base64-encode the JSON. This is how we do it (tracking pixels / GET requests), and lots of analytics/ad tech uses that pattern as well. But I'm not sure how it compares on compression/size.
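For reference, the pattern looks roughly like this (a sketch; the parameter name and pixel URL are made up):

    // Pack an event object into a tracking-pixel style GET request.
    const payload = { event: 'page_view', ts: Date.now(), tags: ['a', 'b'] };
    const b64 = btoa(JSON.stringify(payload));                // base64: ~33% bigger than the raw JSON
    const url = 'https://example.com/pixel.gif?d=' + encodeURIComponent(b64);
    // encodeURIComponent is still needed because standard base64 uses '+', '/' and '='.

    // ...and unpacking it again on the other side:
    const decoded = JSON.parse(atob(new URL(url).searchParams.get('d')));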
Encoding to base64 increases the size by at least 33%. This tool is meant to help make links small enough to share on places like Twitter, Discord, etc., where the max length is around 4000 bytes.
Thanks, that's a good reference. It makes sense that zip would come out ahead for longer strings. Just as another point of comparison, how about encodeURIComponent(gzip(input))?
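In a modern browser that comparison can be run with the built-in CompressionStream (a sketch; `longJsonString` is a placeholder, and the API isn't available in older browsers):

    // gzip a string in the browser, then URI-encode it for a size comparison.
    async function gzipThenUriEncode(input) {
      const stream = new Blob([input]).stream().pipeThrough(new CompressionStream('gzip'));
      const bytes = new Uint8Array(await new Response(stream).arrayBuffer());
      // gzip output is binary, so it needs base64 (or similar) before it can live in a URL.
      const b64 = btoa(String.fromCharCode(...bytes));        // fine for test-sized inputs
      return encodeURIComponent(b64);
    }

    // gzipThenUriEncode(longJsonString).then(s => console.log(s.length));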
The zzart format looks ridiculously wordy. A lot of compression could be achieved just by shortening the property names. I wonder how much of a size advantage BSON would provide, but it'd have to be encoded into something amenable to a URI.
Idea: just put the thing in localStorage? It's cheaper, less fiddly, you can encrypt it with something like TEA if you so desire, and it doesn't make URLs unshareable just by existing.
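i.e. something like (a sketch; the key name is made up):

    // Save/restore app state locally instead of packing it into the URL.
    const saveState = state => localStorage.setItem('appState', JSON.stringify(state));
    const loadState = () => JSON.parse(localStorage.getItem('appState') || 'null');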
Yes, localStorage is great, but this is for making URLs that are short enough to share on Twitter, Discord, etc. The max Twitter URL length is ~4000 characters, I think.
I was looking for something like this last week, so I was interested to try it. It just crashed in Firefox, with the fans also kicking up to max.
It worked in Chrome with a 31% reduction in size, but took a while, and the fans kicked in again.
Yes, I'm interested in trying to make it work with longer strings. Worst case, if it gets a long string it could just split it into chunks of a length it can handle. The speed seems to decrease rapidly with string length, eventually hitting a brick wall; the sweet spot seems to be around 1000-5000 characters.
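The splitting could be as simple as this sketch (the chunk size and the '!' joiner are picked arbitrarily, `crush` is assumed to be the library's compress function, and '!' is assumed not to appear in its output):

    // Crush a long input in fixed-size chunks and join the crushed pieces.
    function crushInChunks(input, chunkSize = 4000) {
      const pieces = [];
      for (let i = 0; i < input.length; i += chunkSize) {
        pieces.push(crush(input.slice(i, i + chunkSize)));
      }
      return pieces.join('!');
    }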
I do that because I wanted users to be able to save their state without burdening those users with accounts or burdening myself with maintaining a DB for something so simple.
I also somewhat abuse the history API and use my "read the URL and load state" logic to implement undo/redo via navigating back and forth, though that doesn't seem to work right now. I am working on a refactor that uses Redux to implement undo/redo and just replaces state, to keep the user's history clean.
Storing encoded JSON in the URL hash is a nifty hack in my opinion. Users can save state in a bookmark or share it with others easily, and it's clear to the users that "where they are" in the URL bar maps to the current app state. Plus, bookmark syncing is taken care of by most browsers to make that state available elsewhere, etc. For the site owner, it means not needing a DB to make an app with some kind of state persistence.
One risk: be sure the state you persist to the URL is in a schema you plan to retain compatibility with! Blind serialization and deserialization is a recipe for bugs and misery the next time you add a feature.
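A sketch of the kind of guard rail that helps (the version numbers, key names, and the migrateV1ToV2/defaultState helpers are all made up):

    // Read/write app state via the hash, with an explicit schema version so old links keep working.
    function writeStateToHash(state) {
      location.hash = encodeURIComponent(JSON.stringify({ v: 2, ...state }));
    }

    function readStateFromHash() {
      try {
        const parsed = JSON.parse(decodeURIComponent(location.hash.slice(1)));
        if (parsed.v === 2) return parsed;
        if (parsed.v === 1) return migrateV1ToV2(parsed);   // hypothetical migration helper
      } catch (e) { /* missing or malformed hash */ }
      return defaultState();                                // hypothetical fallback
    }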
I do the same on a production app at work, but for a different reason.
Storing confidential data in the hash assures my users that I (the developer) don’t have access to this data, since anything after the hash never gets sent to the server by the browser.
It wouldn't stop me from sending that information with a POST request afterwards, but the code is open source and it could be noticed in developer tools.
Say you navigate to a URL like http://www.example.com/webpage?q=54#some-section. The web browser then sends a message to the server that looks something like this:
    GET /webpage?q=54 HTTP/1.1
    Host: www.example.com
    Cookie: well maybe there's a cookie here
Although it's encrypted with SSL, and there are some hopefully irrelevant headers sent along with it (which aren't actually irrelevant, since they can be used to fingerprint you).
As you can see, the bit that comes after the hash isn't ever sent from the client to the server. It was originally meant so that you could link to a particular section of a longer web page, so it was quite irrelevant to the server.
Nowadays it's exposed to JavaScript. This means that code can rely on it: it can read it and set it. The JavaScript author could read it and use it in an entirely in-browser JavaScript app. Or they could read it and send it to the server in a more secure channel, like the body of a POST request, to reduce the chances it gets stored in server logs.
But what comes after the hash is never processed by any standards-compliant web server, nor transmitted by any standards-compliant web browser/client.
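For example (a sketch; the endpoint is made up):

    // The fragment never leaves the browser on its own; if the server is supposed to see it,
    // the page's JavaScript has to forward it explicitly, e.g. in a POST body.
    fetch('/api/state', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ fragment: location.hash.slice(1) })
    });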
One use case where I've put (URI-encoded) JSON in the query string is for parameterizing GET requests with more structured data, since URL encoding is fairly limited and GET requests don't have a body like POSTs do.
You can probably use extended URL encoding for that, which is supported by at least body-parser [https://npmjs.com/package/body-parser]. Not sure about other PLs.
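e.g. with the qs library, which is what body-parser's extended mode uses under the hood (a sketch; the field names are made up):

    const qs = require('qs');   // the parser behind body-parser's `extended: true` option

    // Nested data flattened into bracket-style query parameters:
    const query = qs.stringify({ filter: { status: ['open', 'stale'], assignee: 'alice' } });
    // -> "filter%5Bstatus%5D%5B0%5D=open&filter%5Bstatus%5D%5B1%5D=stale&filter%5Bassignee%5D=alice"

    // ...and reconstructed on the other side:
    const parsed = qs.parse(query);
    // -> { filter: { status: ['open', 'stale'], assignee: 'alice' } }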
As far as I know, base64 is not guaranteed to be URI-safe, though most of the time you should be fine. More importantly, though, converting to base64 automatically increases the size by 33%.
The parent poster was referring to the "base64url" variant of Base64 which uses `-` and `_` as the two special characters and leaves out the useless padding, making it url safe. (The 33% expansion still applies though)
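A quick way to get that form in the browser (a sketch built on btoa/atob):

    // Convert to/from the URL-safe "base64url" variant.
    function base64UrlEncode(str) {
      return btoa(str)
        .replace(/\+/g, '-')    // '+' is reserved in URIs
        .replace(/\//g, '_')    // '/' is a path delimiter
        .replace(/=+$/, '');    // the padding is redundant anyway
    }

    function base64UrlDecode(b64url) {
      const b64 = b64url.replace(/-/g, '+').replace(/_/g, '/');
      return atob(b64);         // atob tolerates the missing '=' padding
    }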
From what I've seen, many intermediaries do nasty things if you have a body on anything that doesn't 'normally' have one. I've heard a fair amount about GCP's load balancers dropping unexpected bodies, and other tools giving a 5XX.
It is really unfortunate, because there are tons of use cases and zero reason to interfere with these requests.
The justification is also downright ridiculous. The argument is that "GET, DELETE, ... have no defined semantics for bodies". Meanwhile the 'defined semantics' for POST bodies is... whatever the application decides.
I made this exact argument in session last week in the HTTP working group at the IETF while discussing the semantics of DELETE. The working group chair is quite a passionate defender of the position that GET should never have a body, and in the end I was outvoted by the room.
I'm slowly coming around to the idea that he might be right. The problem is the (semantic) question of what resource is being discussed. The semantics of GET, HEAD and OPTIONS are that (unless the resource gets modified) the same resource should always be answered the same way. If the resource we're asking about is identified by the URL + the body, then those requests (and DELETE) all need a body too. And then there's an open question for PUT and POST about what resource exactly is being modified by the request - although, as you say, the semantics there are whatever, so that's less important.
I think HTTP has a problem in that it's hard to express a resource name that is complex. Like, "the records which match this specific elasticsearch query" is a hard thing to pack into a URL. If HTTP were different, I could imagine a second body-like part of each request called the "resource section", with its own Resource-Type: application/json header and so on. But instead we just have a URL, so we end up doing awful hacks like packing JSON into URLs and making them unreadable. The URL is long enough for this sort of thing - implementations have to allow at least 8k of space for them, and can allow more. But they're hard to read and display, and there's no standard way to pack JSON in there.
So I wonder if it'd be worth having a formal, consistent way to encode stuff like this in the URL. I'm spitballing, but maybe we need a standard way to encode JSON into the URL (base64 or whatever) defined in an RFC. Then, if you do that, you can add a "Query-Type: application/json" header that declares that the URL contains JSON encoded in the standard way. Then we could at least have consistency and nice tooling around this. So, when you're making a request, you could just pass JSON into your HTTP library and it'd encode it into the URL automatically in the standard way. And when the URL is being displayed in the dev tools or wherever, it could write out the actual embedded JSON / GraphQL / whatever object you've packed in there in an easy-to-read way.
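Purely hypothetical, but the client side of that idea might look something like this (the `q` parameter, the encoding choice, and the Query-Type header are all invented here, not real standards):

    // Encode a JSON query into the URL in one agreed-upon way and flag it with a header.
    async function getWithJsonQuery(baseUrl, queryObject) {
      const packed = btoa(JSON.stringify(queryObject))        // "the standard way", whatever it ended up being
        .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
      return fetch(`${baseUrl}?q=${packed}`, {
        headers: { 'Query-Type': 'application/json' }         // hypothetical header, per the comment above
      });
    }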
I don't have too big of a problem with avoiding GET bodies. What I do have a problem with is that there's no way to POST an application/json body cross-origin without a preflight, which kills latency. So I'm forced to resort to text/plain or similar hacks.
The OPTIONS response should be cached by the browser if the headers are being set correctly in the response. If you want to avoid the latency cost you can proxy the POST request via your own backend, though you can't send authentication credentials from the user's browser to the 3rd party site that way.
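The header that controls this is Access-Control-Max-Age; a preflight response along these lines lets the browser skip repeat preflights for a while, though browsers cap the lifetime (the origin here is a placeholder):

    HTTP/1.1 204 No Content
    Access-Control-Allow-Origin: https://app.example.com
    Access-Control-Allow-Methods: POST
    Access-Control-Allow-Headers: Content-Type
    Access-Control-Max-Age: 7200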
Using text/plain only exists as a backwards compatibility thing. I wouldn't be surprised if that stops working at some point, since it breaks the security model.