So even if you put it behind a promise, when that promise actually runs, it will block the thread.
But continuing with that same thought process, I very nearly went for an architecture that used a web-worker, did the `JSON.parse` in the worker, then exposed methods that could be called from the main thread to get small amounts of data out of the worker as needed. Something like `worker.getProperty('foo.bar.baz')` which would only take the parsing hit for very small subsets of the data at a time. But ultimately the oboe.js solution was simpler and faster at runtime.
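A minimal sketch of that rejected architecture, in case it's useful to anyone. The worker-side wiring and the dot-path helper (`getByPath`) are my own illustration of the idea, not code from the actual project:

```javascript
// getByPath: resolve a dot-separated path like 'foo.bar.baz' against an
// already-parsed object. This is the only work done per lookup request.
function getByPath(obj, path) {
  return path.split('.').reduce(
    (node, key) => (node == null ? undefined : node[key]),
    obj
  );
}

// Worker side (hypothetical worker.js): take the one big JSON.parse hit
// here, off the main thread, then answer small property lookups. Guarded
// so the helper above is also usable outside a worker context.
if (typeof self !== 'undefined' && typeof importScripts === 'function') {
  let data = null;
  self.onmessage = (e) => {
    if (e.data.type === 'load') {
      data = JSON.parse(e.data.json);   // the one big parsing hit
    } else if (e.data.type === 'get') {
      // structured-clone cost is paid only for the small subset returned
      self.postMessage(getByPath(data, e.data.path));
    }
  };
}
```

The main thread would then post `{ type: 'get', path: 'foo.bar.baz' }` and listen for the reply, so only tiny subsets ever cross the thread boundary.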
If you have a large JSON object, you can use the Fetch API to work with it. If you need to cache it, use the Cache Storage API: unlike localStorage, which will freeze the UI, Cache Storage won't.
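A hedged sketch of that pattern. The cache name and the injectable dependencies are my own additions for illustration; in a real page you'd just use the global `caches` and `fetch`:

```javascript
// Fetch a large JSON payload, caching the raw Response via the Cache
// Storage API. Every call here is async, so none of it freezes the UI the
// way a synchronous localStorage read of a huge string would.
async function cachedJson(url, {
  cacheName = 'big-json-v1',          // illustrative name
  cacheStorage = globalThis.caches,   // browser Cache Storage API
  fetchFn = globalThis.fetch,
} = {}) {
  const cache = await cacheStorage.open(cacheName);
  const hit = await cache.match(url);
  if (hit) return hit.json();         // note: the JSON.parse cost still applies
  const res = await fetchFn(url);
  await cache.put(url, res.clone());  // keep a copy for next time
  return res.json();
}
```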
It’s slightly slower since it needs to talk to another thread, but who cares, as long as the UI stays responsive for other work.
It looks like it doesn't, but the exact same symptoms show up even while awaiting fetch's `json()`.
I could see the value in this for sure. I currently have the problem of loading a ton of JS for some users who have thousands of objects embedded in the view, with Rails using toJSON() in a `<script>` tag. It’s creating far too much weight on the frontend. I’ve been considering fetching it via a simple REST request instead.
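If it helps, the fetch-instead-of-embed version can be as small as this. The `/objects.json` endpoint and the injectable `fetchFn` are placeholders of mine, not anything from the Rails app described above:

```javascript
// Instead of Rails rendering thousands of objects into a <script> tag via
// toJSON(), serve the same data from a JSON endpoint and fetch it lazily.
async function loadObjects(url = '/objects.json', fetchFn = globalThis.fetch) {
  const res = await fetchFn(url, { headers: { Accept: 'application/json' } });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json(); // still a parse cost, but the page itself ships lighter
}
```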
I think of js entirely from a node.js perspective where I conceptualize it as an async task. Is this also wrong?
But both server-side and client-side JS use the same system, the event loop. It's basically a message-queue of events that get stacked up, and the JS engine will one at a time grab the oldest event in that queue and process it to completion. Anything "async" will just throw a new event into that queue of events to be processed. The secret sauce is that any IO is done "outside" the JS execution, so other events can be processed while the IO is waiting to complete.
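A runnable illustration of that run-to-completion behavior (the labels are mine):

```javascript
// Each queued callback only runs after the current turn of the event loop
// finishes: the synchronous push always wins, then microtasks, then timers.
const order = [];
setTimeout(() => order.push('timer'), 0);              // macrotask queue
Promise.resolve().then(() => order.push('microtask')); // drained after this turn
order.push('sync');                                    // current turn, runs first
setTimeout(() => console.log(order.join(' -> ')), 10); // sync -> microtask -> timer
```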
Take a look at this link, or search for the JS event loop if you want a better explanation. It's deceptively simple.
For example: https://github.com/nodejs/node/blob/master/src/node_crypto.c...
I generally use that as an example when explaining to people why Node isn't a great fit for a lot of workloads. They have to use these features internally, but you as the user with a CPU-intensive job don't have access to those features.
It's a really annoying problem, and I'm actually really happy to see that many others have the exact same thoughts I had at the time, and that I wasn't just missing something obvious!