How to store your app's entire state in the url (scottantipa.com)
640 points by escot 18 days ago | 395 comments

App state in URL can be a good idea, but if possible I prefer readable path/query parameters instead of unreadable base64 encoding.

As one comparison, this is Google Finance encoding stock chart parameters:

Versus Yahoo! Finance doing the same:

In both examples, the only custom parameter is setting the time window to 5 years.

Here it is decoded:

{"interval":"week","periodicity":1,"timeUnit":null,"candleWidth":4.3486590038314175,"flipped":false,"volumeUnderlay":true,"adj":true,"crosshair":true,"chartType":"line","extended":false,"marketSessions":{},"aggregationType":"ohlc","chartScale":"linear","studies":{" vol undr ":{"type":"vol undr","inputs":{"id":" vol undr ","display":" vol undr "},"outputs":{"Up Volume":"#00b061","Down Volume":"#ff333a"},"panel":"chart","parameters":{"widthFactor":0.45,"chartName":"chart"}}},"panels":{"chart":{"percent":1,"display":"F","chartName":"chart","index":0,"yAxis":{"name":"chart","position":null},"yaxisLHS":[],"yaxisRHS":["chart"," vol undr "]}},"setSpan":{"multiplier":5,"base":"year","periodicity":{"period":1,"interval":"week"}},"lineWidth":2,"stripedBackground":true,"events":true,"color":"#0081f2","stripedBackgroud":true,"eventMap":{"corporate":{"divs":true,"splits":true},"sigDev":{}},"customRange":null,"symbols":[{"symbol":"F","symbolObject":{"symbol":"F","quoteType":"EQUITY","exchangeTimeZone":"America/New_York"},"periodicity":1,"interval":"week","timeUnit":null,"setSpan":{"multiplier":5,"base":"year","periodicity":{"period":1,"interval":"week"}}}]}

I bet if they were more aggressive in using defaults it would be a lot smaller.

OT, we used to joke that LISP source code is mostly brackets. Nowadays we literally store, transform, and pass around brackets willy-nilly.

Speaking about LISP and state in the URL, this is how OVH stores the state when you buy a domain from them:


That's Clojure's EDN encoded for ClojureScript isn't it? I recognize this syntax from briefly working on a CLJ/CLJS app.

Nope, edn uses square brackets and doesn't have a `list` keyword. This is some custom representation.

Probably written in Common Lisp? Based on some quick searching I found this reference https://docs.ovh.com/us/en/web-paas/languages-lisp/

Or it could just be that S-expressions are easy to parse.

we used to joke that LISP source code is mostly brackets.

There was a programming language in the 1970's called SAM76 that didn't care how many closing parentheses you had, as long as it was more than the number of opening parentheses.

So, something like this was perfectly valid:


Relevant xkcd: https://xkcd.com/859

That's genius IMO. Making sure the number of opening and closing parentheses is equal is annoying and unnecessary.

It screams "trust me, all good with the closing parentheses".

I disagree. I would say when I get an unbalanced paren compiler error, it's more common for the omission to not be at the end of the line. Having the compiler silently swallow the error would lead to serious bugs very quickly.

    if ((v1.x()-v2.x())*(v1.x()-v2.x())+(v1.y()-v2.y())*(v1.y())-v2.y())+(v1.z()-v2.z())*(v1.z()-v2.z()) > distance*distance) {
        // collision
Would you like this to fail to compile? Or would you prefer it to silently return the wrong value?

I'd prefer the compiler file an HR complaint against the author.

if collisioncheck(v1,v2, distance) { // collision }

where you have defined an actually readable equivalent inside of collisioncheck would be my prerogative... I also agree paren matching failures should error, but your example is NOT why.

What if I wanted to copy that expression and use it as part of a different expression?

Interactive prompts already do that. E.g. if you google "2+2)*3" it will correctly print "12". This effectively lets you append operations to an expression without editing both sides.

On the other hand, auto closing parentheses in code is a terrible idea for obvious reasons. There are only like 7 languages used in industry and they all have excellent tooling that integrates with your editor to eliminate issues like this.

paredit and rainbow parens makes it a non-issue for me.

I found that joke funny until I realized that, if you sum the count of round, square, angle and curly braces together, your typical piece of C++ or Java code has more brackets than an equivalent piece of Lisp code...

Right, but having them all be the same is what people are joking about :) Although I might check the lisp/clojure vs java code on techempower or another rosetta stone like example.

That, and the usual style of stacking all closing parentheses together at the end of the last line of a sub-expression / statement / whatever. People sometimes do this in Java and it's just as unreadable.

You read it by indentation not by counting parentheses.

It's still very easy to make mistakes reading it that way. If you are reading it by indentation anyway you may as well use a parser that closes parentheses for you at indentation changes. (Like the ML family and Haskell and Python all decided to do.)

I don't find reading properly indented Lisp any harder than reading properly indented Python. I agree however with respect to having the tooling help with balancing delimiters, although I let the editor do that job instead of the parser. And that's one of the major advantages of Lisp. Structural editing is considerably easier to implement. Parinfer[1] for example is really cool. I have to say I'm really excited about tree-sitter allowing similar structural editing for harder to parse languages.

[1] https://shaunlebron.github.io/parinfer/

Well, yeah, that's a major reason why most programmers hate LISP syntax. It uses parentheses for everything even though every keyboard has round, square, angle and curly braces.

I somehow doubt it. I tutored quite a few people of all ages who started to learn programming, mostly C, C++ and Java. They were all initially confused by the different kinds of brackets. They quickly understood where the brackets go (scopes/blocks, function calls, array declarations and access, etc.), but it took some time to internalize which bracket type is used in which context.

I suspect the main reason programmers dislike Lisp syntax is that they try or see an example of some basic arithmetics, get surprised by the prefix notation that's so "unnatural" compared to the "obvious" infix notation they learned in primary school, and write off the entire language as weird, not noticing the fact that infix math is an exception in programming languages, as otherwise almost all code they write in any other language is in prefix notation too, just with a parenthesis shifted to the left.

A somewhat mind-blowing observation to some people:

  (foo (bar baz quux))
  foo(bar(baz, quux));
The two lines above are both in prefix notation. You can go from the first to the second one by a) shifting each left paren by one token to the right, and b) adding commas between whitespace-separated tokens.

I suppose beginner programmers might find all the different kinds of brackets in C++ confusing but after some experience, it helps a lot with readability. If I see `a[1]` then I know that `a` is probably an array. If I see `a(1)` then I know that `a` is probably a function. You could make the argument that those are terrible names for variables and methods and you'd be correct, but every small bit of context helps.

Like most of the verbosity in those languages it benefits the reader more than the writer.

Clojure uses [] and {}, eg. :

  (defn fib [x n]
    (if (< (count x) n)
      (fib (conj x (+ (last x) (nth x (- (count x) 2)))) n)
      x))

I suspect it's probably a global library that handles this, so refactoring it in one place would be difficult. If I had to guess, their implementation looks something like this.

  SomeUrlService.pushState(JSON.stringify({ foo: 'bar' }));


Likely, that "SomeUrlService" is also tacking on plenty of other shit to the URL that is passed along. I don't work for Yahoo and never have, so this is all pure speculation.

Having worked there years and years ago, I doubt it. All those parameters look like chart parameters to me, probably local to Y! Finance.

What is the point of the encoding? Is it obfuscation for the common user? What's wrong with http://url.com/api/?interval=week&periodicity=1... since the encoded version is not shorter.

Encoding isn't usually to obscure but to allow using characters not allowed in a URL or wherever text is being stored. This makes it a bit easier to directly take that encoded text later and decode it into something like json.
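A minimal sketch of that round trip in Node.js (the state object here is hypothetical, not Yahoo's actual schema):

```javascript
// JSON state contains characters ({, }, ", :, &) that would need escaping
// in a URL, while base64url output is URL-safe as-is.
const state = { interval: "week", periodicity: 1 };

const encoded = Buffer.from(JSON.stringify(state)).toString("base64url");
const decoded = JSON.parse(Buffer.from(encoded, "base64url").toString());

console.log(encoded);          // opaque, but safe to drop into a URL fragment
console.log(decoded.interval); // "week"
```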

That makes sense, thank you

> What is the point of the encoding?

Absence of special characters. No need to escape symbols like {}:/\?& etc, etc.

They have a nested structure, it's a pain to encode that in query parameters.

> https://finance.yahoo.com/quote/F/chart?p=F#eyJpbnRlcnZhbCI6...

Dear god... this link was real. I thought it was like a joke or something

Genuinely curious to hear why you are so shocked. Long, ugly, user-unfriendly urls are really prevalent. Arguments can be made that even query params are pretty ugly. Are urls expected to be a good UX these days? What's the big deal?

Just because a thing is prevalent doesn't mean it's good. I have a hard time believing all those parameters are completely necessary for the amount of data being displayed. Oftentimes it feels like people just base64encode(page.state) and call it a day.

I think it's advantageous to keep the complexity and specificity of information in URLs to a minimum, outside of what's needed to retrieve the data set, as it can make backwards compatibility easier. It is valuable (for long-lasting tools) to have URLs that users can favorite and share for their common searches.

> Just because a thing is prevalent doesn't mean it's good.

I agree with this sentiment. I'm not sure I've ever seen a URL referred to as "good"; I was just a bit confused as to why someone would baulk at a URL. In my opinion everything after the host portion of the URL is to serve the app's needs, so it knows where to go or what to do. It's not really an interface for the user, as obviously there are better ways to present that.

I’d say everything after the host is part of the UX, and should serve the user needs. For example:

* Should be book-markable.

* If searching with a query, should have a clear query parameter for ease of browser integration.

* If the URL contains slash dividers that can be read as categories (e.g. storefront.com/department/category/product), then deleting everything after a given slash should go to that category’s page.

* The link should be short enough to be sent over a plain-text conversation without becoming a wall of text.

* The link should be short enough that it cannot hide a malicious subdomain (e.g. Google.com.flights.malicio.us/more/elements).

* I should be able to guess if you're sending me a link about composting or a Rick Astley video

Opaque slugs are problematic in a low-trust world. Phishing attempts are easier when the parameters are indecipherable. I'm not clicking on that shit in a text message or an email. I've seen what happens to people when they do.

Eh, I mean URL shorteners are a thing and all kind of things redirect on the web. The length of the URL doesn't really tell you much.

URL shorteners are a menace. I'm not clicking on the link if you send me a shortened one, especially not when on mobile. These days, shortened links have two primary applications: forcing you to go through a tracking/analytics gate, and serving single-click exploits. Neither of those is serving the user (unless you read it in "To Serve Man" sense).

No - but they have been referred to as "cool" before. Gotta love the early 90's.


The point of a URL is that I can copy+paste it into IRC so my friends can look at the thing I want to share with them.

If the URL has a bunch of goop in it, people are going to make fun of me for it, especially if it's super long and doesn't fit in a single line. Or if there's stuff in there that's specific to me and not the thing I want to show them.

Where I work, we consider URLs part of the UI. Users should be able to share links with each other without pasting a wall of text. We also try to make all URLs human parsable so that they know what they are sharing.

I can say that many, many times, I have zoomed in, picked date ranges, added options, then pasted the yahoo finance url to others... and they have all those same configs passed.

It's awesome and incredibly handy.

Yahoo is on the ball here.

Agreed. However they could still simplify the urls a ton by including only non-default parameters, and still achieve the same thing.

Everything in the chart is customizable, so no, it's not possible to achieve the same thing in the way you are saying

Yes, but anything that hasn't been customized doesn't need to be included in the URL, so most URLs will be much shorter by only including non-default values.
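A sketch of that defaults-diffing idea (DEFAULTS and the field names are hypothetical, not Yahoo's actual schema):

```javascript
// Serialize only fields that differ from the defaults, so a URL for a
// mostly-default chart stays short.
const DEFAULTS = { interval: "day", chartType: "line", crosshair: true };

function toUrlState(state) {
  return Object.fromEntries(
    Object.entries(state).filter(([key, value]) => DEFAULTS[key] !== value)
  );
}

function fromUrlState(partial) {
  return { ...DEFAULTS, ...partial }; // missing fields fall back to defaults
}

const diff = toUrlState({ interval: "week", chartType: "line", crosshair: true });
console.log(diff); // { interval: 'week' } — only the customized field survives
```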

And what problem does this solve? Hint: it doesn't. A long URL is ugly, but who cares? Life is too short. Nice or ugly is relative. It stops nobody from using the tool or sharing links. It's a non-problem really.

I don't think people care so much about a messy address bar as a messy chat/email message after pasting a URL. Shorteners and vanity URLs exist, but that's more friction than just copying from the address bar as-is.

This can be overcome with html, markdown, and other rich text formats that let you specify the visible link text (missing from many chat apps*), but that's also friction to compose compared to automatically linking from just a URL.

*Slack's implementation is awesome: type the link text, highlight it, and paste a URL.

> I don't think people care so much about a messy address bar as a messy chat/email message after pasting a URL.

But it's painful how few care about that!

I always clean them up before I send them, just manually: typically all query parameters, e.g. with Amazon the text description after the actual ID. I rarely even feel the need to check they still work; it's pretty easy. But maybe I shouldn't bother, because it's only me that cares; they don't seem to notice that their links occupy half the screen and mine don't!

It's not even only about messiness. "https://news.ycombinator.com/item?id=34315441" isn't some horrible long base64, but it's not a good UX either: when you paste it in a messenger, or in your notes, by default the reader can't tell what's it about without clicking.

> *Slack's implementation is awesome: type the link text, highlight it, and paste a URL.

Yes! Matches my vision of hypertext perfectly: I usually type a message in normal human language and then paste links all over it for the reader who needs to dig deeper.

> by default the reader can't tell what's it about without clicking.

Most chat apps like Discord will automatically include an embed generated from a URL's metadata when present in a message.

At least in Telegram, this works for at most one link per message, and quickly fills the screen if you send multiple such messages.

And I believe that there is no standard API to get previews of non-public websites in a work chat, you need to develop site-specific bots or something like that?

> previews of non-public websites

It would be interesting if browsers / OSes / apps offered a way to use the browser's cookie jar / basic auth headers / whatever sessioning scheme in whatever apps the user decides should display authenticated previews. Ask the user to allow the release of browser data scoped to the domains that the app suggests. Similar to how some Android apps can put up a prompt to release a Chrome-saved password to an app (which I really wish more apps would include).

Examples of why this is bad:

* http://example.com/settings/transferownership?to=efreak

* http://example.com/settings/deleteaccount?confirm=true

* http://example.com/sendmoney?to=efreak&amount=10000

Might need a redirect. Do previews follow redirects? Does your poorly-written chat app filter urls by protocol, or does it just look for a token with '://'? A good app will allow me to link to things like steam://connect/IP so I can invite you to a game.

* tel:9005555555 (call a phone number. Fortunately, this requires confirmation these days, but what if you're on desktop or the app that does texting can directly make calls without invoking the dialer?)

* steam://exitsteam (steam will exit if it's running)

I just include a screenshot. It’s not that hard.

Please no

There is a standard for this. It's called oEmbed: https://oembed.com/

The chat host still likely needs access to the "non-public website" to discover and call its oEmbed API, so that's something you'd have to figure out, but any website anywhere can offer oEmbed metadata if it wishes.

> Are urls expected to be a good UX these days? What's the big deal?

I think they are. If I want to go back to some page I was looking at earlier, it's really nice to be able to find that page in my history. That Yahoo URL is going to be hard to differentiate if you looked at a few stocks.

It's also handy if the URL's are predictable enough that I can hit one directly rather than having to go through search. Good for me because I don't wait on page load, good for them because they don't have to serve my search request.

It's pretty low on my list of UX annoyances, but it is there.

Pipe that into bash (ie: the user doesn’t know if they’re loading malicious state)

I just like to keep query params simple and readable. I understand the possible use case for this obfuscated URL, which would make it difficult to scrape the site

In your example, absolutely the first URL is better. But if as in the case of the OP, you're trying to encode an entire flowchart, I would argue it's a bit of a pointless exercise. Maybe you could come up with something reasonably readable, but I don't think anyone would care. It really depends on what kind of state you need to store.

Whoever needs to copy that long intestine of a url definitely cares.

If you can shorten the URL, sure. But if data stored in the URL means it needs to be incredibly long, I don't think most users care whether it's readable or not. Of course, at a certain length, it's worth reconsidering whether storing all the data in the URL is a good idea.

A flowchart can be encoded as simply the name of the state you're currently in.

How do you know what all the other states in the chart are and how they are connected?

How would you predefine a state to include the flowchart nodes that the user creates?

I decoded that, gzipped --best, then re-encoded and got something that's shorter at least


you could come up with a better compression scheme given that the dict keys and many of the values are probably common across everyone even if they don't repeat multiple times for a given user. a lot of flexibility would be lost though, and you would need to always update the scheme in a backwards-compatible way.

Once you've already given up on human-readability, it probably doesn't much matter how long/gross the result is. Rather than spending effort and adding complexity, just provide a built in URL shortener.

Isn't storing state in urls and then using a url shortener just a key/value store with extra steps?

But with added session id energy too!

True, though I think the differences around shareability matter— a mapping-table entry is immutable, whereas even a non-logged-in session token is probably not.

Well, there are limits to the lengths of URLs in much of the internet’s infrastructure, so better compression could still be useful.

This is the critical point to make. Paste a few long texts into the boxes and the url goes into the hundreds of thousands of characters long, whereas this says that anything more than about 2000 chars puts you into uncertain territory:


I agree, same reason people use json in the first place despite the inefficiency, it's easier.

Could start with a smaller encoding: use single-character keys, or better yet use protocol buffers or even ASN.1. That's similar to taking advantage of common key names.
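A toy version of the single-character-key idea (the key map is hypothetical; protobuf field tags are the principled version of the same trick):

```javascript
// Map well-known long keys to one-character codes before encoding —
// a poor man's schema. KEYMAP must only ever grow, never reuse codes,
// to stay backwards compatible.
const KEYMAP = { interval: "i", periodicity: "p", chartType: "c" };
const INVERSE = Object.fromEntries(
  Object.entries(KEYMAP).map(([long, short]) => [short, long])
);

const shrink = (state) =>
  Object.fromEntries(Object.entries(state).map(([k, v]) => [KEYMAP[k] ?? k, v]));
const expand = (state) =>
  Object.fromEntries(Object.entries(state).map(([k, v]) => [INVERSE[k] ?? k, v]));

const small = shrink({ interval: "week", periodicity: 1 });
console.log(small); // { i: 'week', p: 1 }
```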

would it be feasible to include the compression dictionary along w/ the data? And, maybe just send it once on the initial request and store it on the client?

> would it be feasible to include the compression dictionary along w/ the data

In theoretical terms (in particular from a Kolmogorov-complexity perspective), I believe the inclusion of a dictionary wouldn't help; it would strictly increase size. It can help (especially from a Shannon-information perspective) if the compression is too computationally demanding to build the data model (which is kind of what all compressors do), but it almost certainly doesn't help in the case of something as small as a URL. But then it's possible to just pack a common compressor with the website (and just refer to it with a version code).

I think the web in general would benefit greatly from some kind of standard compression libraries. Compression tends to have a significant cold start cost to work well, if that were mitigated certainly something like an url could be far shorter.

I believe HTTP2 has header compression (RFC 7541)[1] with 'static tables', which seems to be a form of dictionary, but surprisingly to me no html compression with such shared dictionaries. There are caching libraries, I believe, which help in other ways (unfortunately cross-domain caching seems to be deprecated[2] due to security concerns), but I think a true dictionary-style compressor would bring huge bandwidth savings.

Major web entities should get together to develop those standard dictionaries (and which algorithms to use them with), so they are reasonably fair for everyone (maybe even language specific dictionaries should be used).

Security seems indeed a concern, but I think the key would be carefully preventing that a content were decoded with the incorrect dictionary (meaning the server could serve a different web page than the standard decompression) -- but overall it doesn't seem like a big issue (similar to cache, less problematic than cross-domain caching).

In the future maybe even one of those fancy neural-network (or otherwise machine-learning-inspired) methods could be an option, especially useful for highly bandwidth-constrained links like rural or satellite internet (although of course performance is always a priority, with mobile devices being major users).

[1] https://httpwg.org/specs/rfc7541.html#static.table.definitio...

[2] https://www.stefanjudis.com/notes/say-goodbye-to-resource-ca... Is this still accurate?

> I believe HTTP2 has header compression (RFC 7541)[1] with 'static tables', which seems to be a form of dictionary, but surprisingly to me no html compression with such shared dictionaries.

To my understanding, that static table in HTTP2 is directly in context of recommending and discussing Brotli compression, which does use a table like that as a standard static dictionary in HTTP2+ scenarios including other static dictionary inclusions derived from a corpus of HTML documents.


Thanks, I didn't know Brotli! Indeed it seems to include a dictionary in its compression which seems to help significantly with small pages. I hope compression continues to improve in this way.

> To my understanding, that static table in HTTP2 is directly in context of recommending and discussing Brotli compression

It seems separate, that would be a header compression for HTTP2 header itself, while Brotli encodes content (usually html, css or js).

The header compression in http2 has atypical requirements placed upon it: contents of different headers in a single request must not impact each other's compressed size[^], lest an attacker able to manipulate one of them can use that to guess the other. Thus, LZ77-style compression is out of the window, as well as using Huffman codes selected based on _all_ the header values. In essence, each header value (in a single request) has to be compressed independently. IIUC the dictionary you reference is used for header keys (as opposed to values).

[^] this statement is a bit of a lie for reasons of simplification

> would benefit greatly from some kind of standard compression libraries

Correction: standard compression dictionaries

The possible advantage of the encoding is that it can be encrypted, making parameter hacking impossible.

Would have to do the signing server-side, predicated on the server checking the entire state to ensure validity. Such effort being put into preventing parameter-hacking would suggest there are serious vulnerabilities in that website. The URL is meant to be modified by the client.

> The URL is meant to be modified by the client.

Clarification, the human user probably won't want to modify the URL, but the client will.

With something nice like the Google url, I would happily modify it myself.

Harder. Not impossible. Harder. I don't want to make it sound like I am being disagreeable just for the sake of being disagreeable. If there is one thing the past decade has shown, it is that hacking is just a matter of time and whether a determined person is willing to direct resources at it.

I see you, and raise you:

What if they encrypted the parameters with a one-time pad ?

Hmm. Most likely ( if not only ) way to counter that would be social hacking ( because I assume the pad is generated automatically from some source ), which seems like the best way to obtain it. Then again.. I am not an expert in this field ( but we do sometimes use parameters from link for some projects ). Is it refreshed after some time passes ( if so, maybe there is an easier way to just observe what changes )?

One-time pads have to be stored server-side. If you're going to do that, you might as well just store the data itself under a GUID server-side so you avoid network transfers and then put the GUID in the URL.

You don't need to encode it to do that though. You can just sign the URL

Yeah, why not just base64 encode, shove that into a db with a shorter key? I'm pretty positive this is how tiktok works for sharing videos.

Probably don't want to pay for space for unauthed users.

I think the other params are custom too. There's probably other state not being stored in that URL. If users want to send charts to each other and have them show up the same, this actually seems like a reasonable way to do it.

Another reason to not base64 encode is that many URLs with base64 strings will break in iMessage due to strings randomly matching various keywords that iMessage looks for. I think ‘usd’ is one such substring to look out for.

base64 does not have the problem here, iMessage has. Frustrating.

Agreed, iMessage should have URL detection that supersedes any other string parsing.

The fault doesn't lie in iMessage, it lies in Apple in general. Google's AOSP keyboard will IMMEDIATELY detect that you are writing a link when you put http(s):// in front of whatever you're currently writing, and will just shut up and let you finish without interrupting every 2 seconds with autocorrection suggestions. This does not happen on Apple's i(Pad/O)S keyboard, where typing a link will usually leave the user with an exercise in frustration, trying to fight the autocorrection, which, for the 17th time, has decided to convert your URL into "normal words". Not even changing the keyboard seems to fix it, as, from what I remember, Gboard still uses Apple's prediction engine...

Curious if English is your mother tongue, and if so, which locale?

As a native speaker of en-US, I would write this as "base64 does not have the problem here, iMessage does."

Right before I was forked my parent did “export LC_ALL=C” and here I am.

Originally en-AU. I was trying to match the "have" with a "has". My sentence has a truncated " the problem" in my head.

Your version reads more correct, thanks for pointing it out.

Surely that's something that Apple should fix, rather than everyone else working around it?

Yes, but even if Apple fixes it, you still have to deal with old devices. Better have something that works everywhere rather than "this website is optimized for IE/Chrome".

This isn't at all comparable. It's purely an Apple problem.

In this case, it's actually required. The state is for the entire chart. It has lots of settings: not just the time duration but the type of chart, whether the crosshair is set, several types of chart studies which can be applied to the chart along with their own settings, and also the chart drawings with their own settings. This feature is there so that people can share their own fully customized chart with others, not a generalized chart.

Right, no need to put it all in the URL; it could be just a random key pointing to local storage, for instance, if not some cloud database.

If it points to local storage then it's not shareable.

If it points to a cloud database then you need a cloud database to do what a url could do.

Good point

I used to obsess over URLs as well, until the day I realised that there's no web page with the URL gmail.com!

Pass that super duper long Yahoo URL to Tinyurl and shrink it down to https://tinyurl.com/bdfepwar

It seems like storing state in the URL could benefit from URL shortening techniques so long as the secret sauce logic is both client and server side.

This loses a key benefit of storing all state in the URL: that it's inspectable by the client. Using a URL shortener still means all the data is stored somewhere on the server

The primary benefit of storing all the state in the URL is that the server can be stateless. Using a shortener just moves the state to a different server. A user being able to inspect the state is just a side benefit.

Okay that's not exactly what I had in mind. My suggestion was to implement the same hash both server and client side so that the query string is small and manageable. Don't worry about the state not being fully legible because you will implement the hash decode on the client too. The code will be there for you to peruse.

How would you decode the hash? Any of infinite states could map to the hash

I've lost my mind. I literally just implemented a dictionary web API the other week and have completely forgotten how I did it. Just forget what I wrote.

In a similar vein, I've long been a fan of Kayak's URL structure.[1]

[1]: https://www.kayak.com/flights/SEA-IND/2023-01-10/2023-01-14?...

I feel like you're ignoring the context of the linked post while also trotting out the most obvious comparison of all times.

ah, azure does the same for generating the "share" URL for their cost dashboards.

it additionally gzips the data to compress it further, with.. some savings..

in the above example you've shared, gzip would've,


This is quickly becoming a standard in apps and it really shouldn't be handrolled, since it's such a common requirement and easy to get wrong (between serializing/deserializing/unsetting states). In Svelte it is now as easy as using a store: https://github.com/paoloricciuti/sveltekit-search-params

in general i've been forming a thesis [0] that there is a webapp hierarchy of state that goes something like:

1. component state (ephemeral, lasts component lifecycle)

2. temp app state (in-memory, lasts the session)

3. durable app state (persistent, whether localstorage or indexeddb or whatever)

4. sharable app state (url)

5. individual user data (private)

6. team user data (shared)

7. global data (public)

and CRUD for all these states should be as easy to step down/up as possible, with as few API changes as possible (probably a hard boundary between 4/5 for authz). this makes feature development go a lot faster, as you lower the cost of figuring out the right level of abstraction for a feature

0: relatedly, see The 5 Types of React Application State, another post that did well on HN https://twitter.com/swyx/status/1351028248759726088

btw my personal site makes fun use of it -> any 404 page takes the 404'ed slug and offers it as a URL param that fills out the search bar for you to find the link that was broken, see https://www.swyx.io/learn%20in%20public

Note on 2: I haven't seen this get a lot of use (or maybe I just haven't noticed it), but the Storage API also provides a `sessionStorage` object [1] with some interesting properties. Not only are values persisted across same-tab refreshes, but if you duplicate the tab, the session values are copied over as well (but they don't remain linked like `localStorage`). An interesting use case for this, IMO, is for tracking dynamic layout values (like if you have resizable panels, scroll areas, etc). If you keep this kind of thing in sessionStorage, you can refresh or duplicate the working tab and that ephemeral state will carry over, but since it's copied (not shared) the tabs won't step on each other.

1: https://developer.mozilla.org/en-US/docs/Web/API/Window/sess...
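For illustration, a storage-agnostic sketch of that panel-layout idea (function names are made up); in the browser you'd pass `window.sessionStorage` so the state survives refreshes and tab duplication without being shared across tabs:

```javascript
// Minimal sketch: persist layout state in any Storage-like object.
// saveLayout/loadLayout are hypothetical names for illustration.
function saveLayout(storage, layout) {
  storage.setItem('layout', JSON.stringify(layout));
}

function loadLayout(storage, fallback) {
  const raw = storage.getItem('layout');
  return raw === null ? fallback : JSON.parse(raw);
}

// A tiny in-memory stand-in for Storage, usable outside the browser:
function memoryStorage() {
  const m = new Map();
  return {
    getItem: (k) => (m.has(k) ? m.get(k) : null),
    setItem: (k, v) => m.set(k, String(v)),
  };
}
```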

Unfortunately sessionStorage is not copied over when you ctrl+click a link, and that’s arguably the most common way to “duplicate” a tab. Something about security iirc but i dont grok it.

Still an underappreciated browser feature though, esp for classic server-side rendered apps.

I'm actually kinda surprised that's the case for links to the same origin. I suppose you could probably still get this working by using `window.opener` along with `postMessage` to request data from the opener, although you'd have to be very conscientious about checking the origin and verifying that the requested data is actually ok to share.

Yup. Session storage is particularly useful for remembering the initial URL of the user when they leave your app for SSO.

i was thinking 2 is more like Redux, btw :)

ive never honestly used sessionStorage as it's not intuitive to me (if i want it clientside i should probably track it serverside, and at that point server is my source of truth)

Could we please stop this "shouldn't be handrolled" or "don't write your own X". It only results in pushing some false idea that you have to be special to do these things, or that innovation is impossible. I can't believe we went from "don't roll your own crypto" to a plain thing like "don't roll your own url state management". What? That is obviously exactly what you should do to get a deeper understanding of the technologies you're working with. I think the only people I hear say this is because they have some agenda in pushing developers to be "tool users" instead of "problem solvers".

It's not that you shouldn't say "there be dragons" or suggest existing solutions, but that's doable without saying you shouldn't do X at all...

Does anyone know of solutions for this in React?

Tanstack is working on a router that purports to solve a lot of this.


So you think of a new feature "X". That feature has a bunch of state it needs to manage. Each piece of state has _requirements_ over where it fits within this hierarchy based on how this state relates to the feature being developed. I don't understand where the cost lowering is? I know instantly where the state should live when I design the feature.

requirements evolve over time. and they tend to go up in terms of persistence/auth/sharability needs. it would be nice to make that easy; and conversely when they arent needed anymore it'd be nice to take the state persistence level down with just a few character changes rather than switching state systems entirely

Agreed. Thats a nice conceptual layering as well. Thanks for sharing.

This makes sense but also seems very annoying to do? E.g let's say you are storing state in the URL, and now the user wants to save it to their profile. You need to upgrade the state from "encoded into the URL" to "stored in my database", which seems annoying.

This is pretty common and has a bunch of advantages, like the fact you can link to and bookmark a particular state.

Also, if you are careful you get undo and redo for free with the browser's back button doing all the work for you.

The disadvantages are that your representation of internal state becomes part of the interface - if you ever change your app you need to deal with versioning the state so your new version can transparently handle the old state format.

If your app has a server component that acts on this state, be super careful about acting on it and treat it as you would any other input under user control.

If you app is completely client side, consider storing the state in the #fragment section of the URL. This never gets sent to the server. An example from my own site [0] - see how the fragment part of the URL changes as you select different topics.

There are also limits on just how much you can cram into a URL but with care you can shove a lot of state.

[0] https://sheep.horse/tagcloud.html#computing

> If your app has a server component that acts on this state, be super careful about acting on it and treat it as you would any other input under user control.

I would recommend signing it if it's generated by the server component, and checking the signature when the server component is provided this signed state.

For example to do this in Node is quite straightforward.

Key generation, signing, and verification:

    const crypto = require('crypto');

    // Key generation: do this once and persist both keys
    const { publicKey: pubkey, privateKey: privkey } =
        crypto.generateKeyPairSync('ed25519');

    // Signing: serialize the state, sign it, join payload and signature
    const data = Buffer.from(JSON.stringify(state));
    const signature = crypto.sign(null, data, privkey);
    const signeddata = `${data.toString('base64')}.${signature.toString('base64')}`.replace(/=/g, '');

    // Verification: split, decode, and check before trusting the state
    const parts = signeddata.split('.');
    const payload = Buffer.from(parts[0], 'base64');
    const sig = Buffer.from(parts[1], 'base64');
    if (!crypto.verify(null, payload, pubkey, sig)) {
        // signature not verified, throw or return
        throw new Error('invalid state signature');
    }
    const verifiedState = JSON.parse(payload.toString('utf8'));
As the above uses Ed25519 the signatures are quite small too. It needs a bit more error checking, and might need extras like expiry time and such, but should be roughly sufficient.

Excellent idea for those apps that need that type of protection. For many apps however ability to manually edit the link to get to the different state is actually a free bonus feature. I've built a CRM/Cases management app for the client that stores data filters in the url, and while there's a nice UI to control filters, we observed that lots of users simply go and edit the url directly to quickly get what they need.

Good advice. For some apps you might also need to protect against replay attacks to prevent the user from reverting to a previous state in a way that you app should not allow (undo'ing a change of bank balance, etc).

But if your state is so important, it is probably better to not use these techniques at all and just store the state on the server.

This technique is client-side data only, so updating a client-side state would only appear to revert your bank balance on the UI, but wouldn’t trigger any server-side functions to do so.

If you sent the client state to the server for some type of CRUD action or persistence, you’d need to sanitize and validate it first. And at that point like you said, why not just keep the state secure server-side and not trust the client.

I did this for passing parameters to a twitter card generator php script :joy:

> This is pretty common and has a bunch of advantages, like the fact you can link to and bookmark a particular state.

Conversely, you can't link to the newest version.

IMHO you should never store the data itself in the url, only the params that you use to fetch the data. So if the page is the list of the latest 10 posts in a category, you'll store the category in the url and the sorting criteria, not the IDs of the posts themselves. On the refresh you always get the fresh data.

If the state cookie is generated by the server, then the best thing to do is to encrypt all application state cookies, and include an expiration time in them.

Also, consider whether you want to allow the user to share the URI or not. If not, then you need to always check that the user is authenticated as whatever you wrote in the state cookie.

> The disadvantages are that your representation of internal state becomes part of the interface

This is the biggest reason to avoid this. URLs aren't meant to be used this way. I'd only do it if I wanted a quick and dirty solution.

> URLs aren't meant to be used this way

I disagree, URLs are supposed to point to a resource - in this case the resource is the client state of the app (aka "deep linking"). It is a useful approach.

Of course, it can easily be misused.

The suggestion was to put your entire client state into it, which is probably more than just the resource location (aka deep link).

There is no such thing as "client state" separate from the notion of resource under REST. The URL is meaningful only to a server. If a server can encode data in the URL so it doesn't need to persist it, that's perfectly fine, so long as the URL conforms to all expectations of resources, ie. caching directives, lifetime, etc.

Modern web apps often use the URL in ways that only matter to the client and can be edited client-side without issuing another request to the server, unless you reload manually, in which case everything is reset except the URL state. This doesn't violate REST principles. Some of that makes sense to put in the URL so users can return to past state, some is very temporary and wouldn't make sense at all to persist across a reload/back.

For example, you may track the scroll position for the purpose of loading more content when the user reaches the bottom of a feed, but you don't want to persist that across reloads. And the user may click on some content and anchor the view to it, which you do persist in the URL for the purpose of history or sharing.

Sure, and all of that belongs in the fragment of the URL which is not sent to the server but is specifically reserved for client-side resources.

That’s like one of the biggest webapp debate according to pg

>Also, if you are careful you get undo and redo for free with the browser's back button doing all the work for you.

Ugh! Please don't do this!

A better idea is to use the URI fragment (the string after the # sign). It has the advantage of not having a size limit, and not needing to be sent all to the backend during the request. `window.location.hash` is your friend.
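A minimal sketch of the round trip (helper names are hypothetical); in the browser you'd assign the encoded string to `window.location.hash`:

```javascript
// Serialize state into a fragment-safe string and back.
// In the browser: window.location.hash = '#' + encodeState(state);
// on load: const state = decodeState(window.location.hash.slice(1));
function encodeState(state) {
  return encodeURIComponent(JSON.stringify(state));
}

function decodeState(hash) {
  return hash ? JSON.parse(decodeURIComponent(hash)) : null;
}
```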

Everything old is new again!

A decade-plus ago, this was common in single-page apps. It was the only way to do it before the history API existed, and enough people were doing it that Google was pushing a fragment-url scheme [0] that allowed server-side rendering. You'd make such links in a certain way, and the crawler would request a modified version of the URL that the server could render.

(Per a comment on that source, it's not been a thing since 2015, since the google crawler can now handle javascript)

[0] https://stackoverflow.com/questions/6001433/does-google-igno...

Mega.co.nz has used that to hide the decryption key for almost a decade. You share links but Mega never knows your key because the fragment hash is never sent to the server or in the referrer. Encrypted files are stored on Mega servers but the decryption occurs in JS on the client side; files are decrypted into temporary JS Blob objects in the browser, then you save them aka "download" locally... a lot of clever tech there. https://help.mega.io/security/data-protection/end-to-end-enc...

How to use this in React?

I don't think React has anything to do with it. Its just structuring URLs in the browser an appropriate way.

You'd need to read state off the url fragment (location.hash). You can feed that into your appropriate state management system, I suppose. Some (like react router) I think support hash routing, but I don't know that they support both has routing and history routing at the same time, so that may not be quite up to task if thats the case.

This is a great point, but also depends on what browsers you want to support. In my experience some browsers a) will truncate the url when you try to copy it from the browser bar (safari), and b) will not support any url including the hash past a certain cut off (like the SO comment states).

I just updated knotend to use the hashmark, see my comment above!

> It has the advantage of not having a size limit

Still though I suppose you could probably gzip it before base64 encoding it for some additional optimization.

Extremely long URLs have other UX issues, e.g. sending them on chat apps and having them eat up multiple scroll pages in one URL, like:

"Check out this event:

http://..... ... ... (500 scroll-screen-lengths of just URL) ... ...

Here's another event on the same day:

http://..... ... ... (500 scroll-screen-lengths of just URL) ... ...

Can you let me know what you think and which one you'd like to go to?"

The article mentions that compression is already being used.

Don't most chat apps truncate long URLs?

https://app.diagrams.net/ has an excellent example of this. File > Export As > URL...

I just updated knotend so that it now uses the hashmark. I think this is a great idea, and it just loses the ability to get the urls server side, which in my case is totally fine. I also updated the blog post and acknowledged your comment, so thank you!

Sometimes you might want the state on the backend, e.g. to generate open-graph metadata tags.

well you don't have to store ALL of your data this way. Also you can always just choose to send it to the backend still with a POST

Be careful with this. This is only supposed to be used on the client side. Many HTTP server implementations (and infrastructure you run on) will not let you use the URI fragment even if it does hit the wire, or things like caching will break - you almost certainly want use query parameters for anything backend related

URI fragments being client side are entirely their point. Clients should never send them to the server.

Please don't. The web lives and dies by links, the more specific the better, so #to-an-id or #:~:text=to-search are so much more useful than bad state management that probably only works for that one logged-in user anyway. Put that state in the part of the URL you unambiguously control (query params).


That's really interesting. Do you have any idea how much data (i.e. max amount of json) you can store using something like that?

It seems to vary greatly by browser. Based on this SO answer [1] MS edge only supported 2000 characters in 2017, while Chrome currently handles 40 million characters (of compressed, encoded json).

[1] https://stackoverflow.com/a/44532746

Weirdly, with IE it was exactly 2083 characters (1), not some base 2 number, and MS never increased this number over all these years. This upper limit even included fragments. We tried to do something similar as described in the article and were surprised to learn about IE's limitation. In the end, we stored states on a JS object instead using their hash sum as keys and put that inside the fragment. Then fragment based navigation worked like a charm across browsers.

(1) https://support.microsoft.com/en-us/topic/maximum-url-length...

I don't quite understand and it's on me, trust me :)

My reading is you were worried about length of encoding one state, so you moved to encapsulating states in a dictionary with keys of hash of State and objects of State

And this led to a decrease in size of the URL?

My guess at my misunderstanding: you kept the state dictionary server-side or at least some other storage method than the URL, and just used the URL to identify which state to use from the dictionary. I e. The bit you add to the URL is just the hash of the state, or dictionary key

Yes, in the final solution we just stored a hash sum inside the URL fragment (I think it was an MD5 sum) and the actual state inside a JS object in main memory. With a page reload you lost all states which was fine for us but you could use session storage to avoid that.

> were surprised to learn about IEs limitation

IE was all about limitations. Nothing surprised me about IE limitations.

If you have so many parameters that you need a URL that's over 2048 characters, you should probably be reminded that localStorage exists.

Using local storage means that you can't share a flowchart with someone else (or embed one in another page) at all, let alone easily.

If you need to show a flowchart that doesn't fit in a URL, the idea that you should be able to do this with a URL is kinda bonkers. The solution there would be www.example.com/?data=url_for_the_flowchart_definition, because what you're talking about isn't page state, it's a secondary resource.

We wrote a library for storing application state in the URL. Its novel feature is using fragment query, which prevents the information from being sent to the server.


I would love to see it get more use.

Here's a small demo: https://cyphrme.github.io/URLFormJS/#?first_name=Bob&last_na...

See my other comment on this page for some other examples of its use.

This looks very cool, and I'll definitely use it in a side project sometime soon!

Does this necessarily prevent the ability to provide in-page anchors?

The fragment query delimiter is `?` with `&` between values, just like normal queries. In practice, browsers are not fragment delimiter aware, so at the moment I would treat them as mutually exclusive.

There's a larger issue of URL fragment delimiters. For a practical work around on Chrome only, fragment directive/text fragment can be used as a delimiter. (In my opinion, Chrome's delimiter should be re-worked with a new, more comprehensive delimiter.) Example of the workaround: https://en.wikipedia.org/wiki/URI_fragment#References:~:text...

Since you're using JavaScript, you can implement anchors in the way you want.

I did this for an Othello game back in the 90's. The state was encoded in the URL along with the players move. It seemed weird to encode a slightly different URL for every move the player might click, but that also meant squares that were not legal moves didn't get a URL. And that meant the mouse pointer would change when you hovered over a valid square! Another trick was to return the updated board state, but also include an automatic reload after a few seconds with the computers next move already determined. This made it look like "thinking" was going on when in fact it was already decided when the board updated on players move ;-) This was all in a CGI script. As others have pointed out, it also supported "undo" via the back button. Fun.

Last November I had covid and during that time I wrote a little pastebin that runs in the browser, compresses the paste data with Brotli, and puts it in the URL. It has line numbers, file naming, and syntax highlighting so it's feature complete for my own use. Here's a demo, but beware, the URLs are loooong: https://nicd.gitlab.io/t/#NhfW?Qm3F?6-&22c&CQZXuP+Aej5OXzXk7...

I find it interesting to sometimes play around with the pasted data just to see how it compresses. Sometimes adding a character or two may cut several characters from the URL length!

Compression is one of the closest things to real life magic I’m aware of. Especially now that people have basically compressed the entire internet into a couple gig AI models.

And then when I think about biology/genetics and what kind of unfathomably weird and convoluted compression is going on there, and then think about whatever it is our brain is doing… and then when I think about how like all of our perception is probably some kind of weird inescapable compressed representation of true reality…

It’s all magic. Real life magic.

> The maximum URL size is 2048

FYI, this is almost entirely untrue. Maybe for some old versions of IE, but modern browsers can handle extremely long URIs

Yep, I rechecked it and it seems Chrome/Fx/Safari all support at least 32k. I replied in another comment I'm considering a checkbox to allow even longer URLs, though, I don't know who would want to paste them for others to read. :D

Is the Brotli compression done in the browser? If so, can you give some insight how this is done?

Yep, it is! You can find the sources on GitLab[0]. The compression is done using brotli-wasm[1].

The entry point to the compression is the syncCompress[2] function, which converts the data to UTF-8, adds the needed header bytes, and then compresses using brotli-wasm. Decompression is done in a streaming way in streamDecompress[3]. This is to avoid zip bomb attacks, where even a short URL could decompress to gigabytes of data, locking the browser. Thankfully brotli-wasm had streaming decompression builtin, I just had to write the wrapper code to update the text content and the status bar in the UI.

You can find the brotli-wasm code I'm using in the vendor folder[4], there is a JS wrapper and then the WASM code (both are straight from brotli-wasm).

[0] https://gitlab.com/Nicd/t

[1] https://github.com/httptoolkit/brotli-wasm

[2] https://gitlab.com/Nicd/t/-/blob/05e587b0183ff80b1c6e050b5d3...

[3] https://gitlab.com/Nicd/t/-/blob/05e587b0183ff80b1c6e050b5d3...

[4] https://gitlab.com/Nicd/t/-/tree/05e587b0183ff80b1c6e050b5d3...

I love stuff like this. Around 2008 or so I was working on a Java game for the T-Mobile Sidekick (aka hiptop). The platform let you register url scheme handlers so I registered "drod" (the game was a from-scratch port of Deadly Rooms of Death, a classic shareware game). Since the game was a 16x16 grid-based game with 3 layers, it was super efficient at expressing levels, including monsters and locks and keys, with just a limited number of bits. On my website I had "DLC" in the form of drod://[base64 encoded level here]. It always filled me with joy that users already downloaded the levels when they navigated to the website. Clicking the link just opened the game with the data they already had on their device.

Isn't this also how old school games (I've seen it as late as the NES I think) let you save progress? You'd get a short string as your "save" and could just enter it later to restart the game in the same state. I love that it offers essentially unlimited "saves"

The downside of a lot of those "password" save systems was that you lost your lives, and sometimes more. The password sometimes just wasn't long enough to store enough information. For example, Driver[0] (at least on the Game Boy) is an icon-based one with eight(?) symbols per field and four fields. That's only 12 bits.

[0]: https://en.wikipedia.org/wiki/Driver_(video_game)

Great YouTube channel about reverse engineering old game password systems: https://www.youtube.com/playlist?list=PLzLzYGEbdY5nEFQsxzFan...

And you can use your save on another system.

In React you normally create local ephemeral state like this:

    const [myState, setMyState] = useState(null);
I've created at least 3 libraries that follow that same pattern, but instead of ephemeral the state is saved somewhere:

    // ?firstname=
    const [firstName, setFirstName] = useQuery('firstname');

    // localStorage
    const [name, setName] = useStorage('name');

    // global state (that can be connected to other places)
    const [user, setUser] = useStore('user');
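The primitive under a hook like `useQuery` is just `URLSearchParams`; a framework-free sketch (the React wrapper, which would call history.replaceState with the result, is omitted):

```javascript
// Sketch: read a value from a query string, and produce an updated
// query string with one key changed.
function getParam(search, key) {
  return new URLSearchParams(search).get(key);
}

function setParam(search, key, value) {
  const params = new URLSearchParams(search);
  params.set(key, value);
  return '?' + params.toString();
}
```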

Made this thing in 2010. I can't remember if I saw the trick being used in another website before or not... https://mrdoob.com/projects/voxels/#A/bnciaTraheakhacTSiahfa...

Hey all, I made this post to show a technique that I'm using in my keyboard-centric flowchart editor [0]. I really love urls, and the power that they hold, and would love to see more apps implement little hacks like this.

Also, another shout out to mermaidjs and flowchart.fun for also implementing similar url-based sharing.

[0] https://www.knotend.com

There is a size limit though that quickly gets exhausted if you are storing text (2000 chars)

(maker of flowchart.fun here) This is technically true but it's a little more nuanced. The 2048 character limit comes from Internet Explorer. There's a great answer on SO https://stackoverflow.com/a/417184/903980

Still on FF I opted to use LZ Compression (pretty sure many other sites do this as well) to get the number of characters down

I have done this myself and various tools e.g. slack, Twitter will truncate long urls. Eventually it will break somewhere, not just in the browser. If your tool has a hard limit on information content it's ok, but that does not seem to be the case here.

Store the content in ipfs and just put the hash in the URL?

I haven't used https://github.com/ipfs/js-ipfs in this capacity but I'm under the impression that moving bits around like that is more or less its purpose.

Although I suppose this puts a burden on the URL-creator to pin the content until the URL-clicker doesn't need it anymore, which is not how URLs are supposed to work.

This is the right answer here. The solution works until it doesn’t. Browser URLs are limited in real life. So when the state grows it will break sooner or later. Of course it depends on the usecase.

You're right, its a limitation. Knotend also supports upload/download of a .knotend file to get around this.

If the schema is fixed, using proto buffers will greatly reduce the length required.

Love the app!


I had to add compression into Mermaid live editor's URL state as the length limit was exceeded with big diagrams.

Wrote this SerDe which can be extended with better algorithms in the future. https://github.com/mermaid-js/mermaid-live-editor/blob/devel...

Eg: https://mermaid.live/edit#pako:eNo9kUGTnCAQhf8K1adNlesoqKiHV...

I did something similar, but compressed with pako to make the URL as short as possible: https://www.pokerregion.com/?r=Y2RwYBBgYGFkcGAQYGBhZHBgEGBgY...

Here is the compression code: https://gist.github.com/kissgyorgy/39623a7d0ba2f6faafb464b72...

I used the properties of my data structure (multiple cards represented in a byte as bits) to represent the same data on less bytes (to avoid compressing JSON with unnecessary characters)
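A sketch of that kind of bit packing (not the actual pokerregion code): each card's presence becomes one bit, so a 52-card set fits in 7 bytes before any compression.

```javascript
// Sketch: card i present sets bit (i % 8) of byte (i / 8).
function packCards(cardIndexes, totalCards = 52) {
  const bytes = new Uint8Array(Math.ceil(totalCards / 8));
  for (const i of cardIndexes) bytes[i >> 3] |= 1 << (i & 7);
  return bytes;
}

function unpackCards(bytes) {
  const cards = [];
  for (let i = 0; i < bytes.length * 8; i++) {
    if (bytes[i >> 3] & (1 << (i & 7))) cards.push(i);
  }
  return cards;
}
```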

There is a max length to URLs after which it is all wild west based on the browser implementation.

See: https://stackoverflow.com/a/417184

Great for complex apps in the browser and sharing complex save states!

Bad for ordinary, mundane web applications and sharable URLs!

Like, great for a musical step sequencer in the browser, bad for a website or HTTP service

OTOH, app state on websites of commercial service providers might be best represented by a JSON blob anyway.

In something like a checkout process or form submit, of course this doesn't matter much.

But I prefer human-readable URLs wherever possible.

And I implemented a similar pattern once, too.

It's a very interesting approach in itself.

Relevant POJO state tends to be small enough, but long lists of resources in a URL are not supported by most servers and not advisable.

Just do not. That, or alternatively a hidden field, is how asp.net WebForms did its state management 20 years ago. Worst possible idea ever.

Good for small scale unimportant external state management but not for your everyday web app.

When I saw this idea all I could think of was the ASP.NET "postback" form data and URL data! I'm glad to not have to deal with that anymore.

It also degraded performance. I had to use a .NET-based issue tracking web app 20 years ago. The performance was horrible, with pages sometimes taking 20 seconds to load. I eventually viewed the source of the page and discovered a form field called "viewstate" that contained several megabytes of encoded data.

I tried to do that at first with https://textshader.com because the whole site is client-side and I didn't want to bother with a backend. It didn't really work because the state is unbounded in size, so instead I just chuck it on GitHub gists and then point to the gists. It means I don't have to store any data myself and there's zero recurring costs outside of the domain name... but if the site ever got popular I'd probably have to figure something else out.

I dream of a world where there’s some kind of ipfs like thing for storing state in some sort of distributed commons, all while maintaining user privacy/allowing for state to expire if untouched, and monetized in a balanced way to incentivize hosting without discouraging consumer adoption too much/trending towards gouging.

We’re so close/all the pieces needed seem to exist, but getting something like that off the ground is super difficult.

On a related note …

If you don’t want to store anything on the server, why serve the static JS and HTML? You can embed them in the URL also.

In that case the URL just becomes a local file.

You should store the file locally, along with all the data it embeds.

Or you can store the files on a CDN at the edge. The problem with the latter is that people can abuse the storage.

Really, the Web needs to be overhauled to have a topology more like a DHT, where you only have peering agreements with a few peers, and they cache the content temporarily when it is requested a lot. So no centralized servers at all.

Vuejs Playground does this. This is the url for the default template:


Worth noting that it actually should use base62 https://en.wikipedia.org/wiki/Base62

It’s the same as base64 but w/o the "+" and "/" symbols.

There is a URL safe version of base64 that uses - and _ instead. Base64 tools can be more readily available.
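A small sketch of converting between the two alphabets (modern Node also supports 'base64url' natively in `Buffer`):

```javascript
// Sketch: convert standard base64 to the RFC 4648 "base64url"
// alphabet: '+' -> '-', '/' -> '_', and trailing '=' padding dropped.
function toBase64Url(b64) {
  return b64.replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/g, '');
}

function fromBase64Url(b64url) {
  let b64 = b64url.replace(/-/g, '+').replace(/_/g, '/');
  while (b64.length % 4) b64 += '='; // restore padding
  return b64;
}
```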

> Another great benefit is that these urls can be embedded. That means the user can put their graph on any web page that supports embedding. I see people typically do this with wikis like Notion, which means you can share with a team

This is so valuable that it can't be overstated. Being able to store the raw data in a link WITH the visual representation, you'll never struggle trying to find out who has the latest version.

Bingo. I use this all the time to let users pass preconfigurations of internal tools back and forth. These are tools that live on the client and don't have a backend. The downside is my users complain about 'long/ugly links' which is another problem...

This is exactly how Shel Kaphan (first eng @ Amazon) handled the state of a shopping cart in the mid 90s. Here is him discussing some of the pitfalls: https://lists.w3.org/Archives/Public/www-talk/1995NovDec/020...

About 6 years ago I had to do this on a project. We had charts with a lot of options and we wanted them to be bookmarkable and shared between colleagues. We started with base64, soon moved to a custom encoding with version support https://github.com/ananthakumaran/u
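A sketch of what version support can look like (names made up, not the library's actual API): prefix the payload with a version and dispatch decoders on it, so old bookmarked links keep working when the format changes.

```javascript
// Sketch: version-prefixed URL state with per-version decoders.
const decoders = {
  1: (payload) =>
    JSON.parse(Buffer.from(payload, 'base64url').toString('utf8')),
};

function encodeV1(state) {
  return '1.' + Buffer.from(JSON.stringify(state)).toString('base64url');
}

function decode(encoded) {
  const dot = encoded.indexOf('.');
  const decoder = decoders[Number(encoded.slice(0, dot))];
  if (!decoder) throw new Error('unsupported state version');
  return decoder(encoded.slice(dot + 1));
}
```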

I learned about your library because the harmopark.app site uses and credits it. I plan to use it in a music app I'm developing.

I saw someone forked it and removed the lodash dependency / minor refactoring https://github.com/ananthakumaran/u/compare/master...j-te:u:...

I built a quick/wip payroll validator [0] that can read state (i.e. payroll details) from the URL.

The purpose in this case was to generate a QR code, the idea is this can be included in physical payslips. Then the numbers behind the calculations (tax deductions) can be validated, broken down, and understood. I'm developing more tools to these ends, via my overarching project calculang, a language for calculations [1].

I also have a loan/repayment validator [2] but haven't added this QR code feature yet.

Bank letters e.g. "Interest rates are rising and now we want 100e more per month, every month" could use a QR code to an independent validator or to see the workings behind the 100 calculation.

Not using this in the real world now and there are security considerations to keep in mind, but reading state from a URL facilitates the usecase: QR codes that link physical numbers to their calculation and model.

Implementation of payroll calculator is an Observable notebook and thankfully it neatly supports all my strict requirements as demo of this.

[0] https://observablehq.com/@declann/payroll-playground-ireland...

+Feature tweet: https://twitter.com/calculang/status/1608183731533107206

[1] https://github.com/calculang/calculang

[2] https://observablehq.com/@declann/loan-validator-dev

What is old is new again.

Back before websites were "Apps", you could often do exactly this (Including Authentication!) right from the url.

Before cookies, I remember in middle school (ebay?) using the URLs for sessions.

If they forgot to include the session token on any link on the page, you'd be logged out and have to authenticate again, or you could navigate backwards back to a sessioned address in your history. Once I figured that out, I copy and pasted my session id so if they omitted the session on a link I wouldn't have to log in again.

Am I alone in thinking it's ridiculous for an editor action to push/pop state in browser history to allow undo/redo? Surely this leads to substantial browser history pollution?

You are not wrong. I cannot think of many interactions with an editor where if the user presses the back button they want to undo the last micro action they did, instead of going to the previous page / major UI interaction. Even just trying the demo after adding 3 blocks I had to press back like 5 times to exit the page back to where I was before.

Absolutely agreed. Given how HN can be when complaining about back button breakage (so common that it had to be explicitly banned), I'm surprised so few people are talking about the implications here.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact