Consider just-in-time compilation, which has also been applied to AOT-compiled languages [0]. That could be one avenue, as long as the compiled code works on every machine (a common instruction set); alternatively, something slightly higher-level like LLVM IR could be produced, transmitted, and then compiled on the driver/executor.
Another approach could be to simply transmit the function source to the driver, compile it there, and distribute the result to the executors if they share the same architecture, or compile it on the executors if not.
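To make that second approach concrete, here's a minimal sketch in Python (the `serialize_fn` and `compile_and_run` names are made up for illustration, and the "transmission" is just passing bytes around): the driver packages the function's source text, and each executor compiles and invokes it locally.

```python
import textwrap

def serialize_fn(source: str) -> bytes:
    """Driver side: package the function's source text for transmission."""
    return textwrap.dedent(source).encode("utf-8")

def compile_and_run(payload: bytes, fn_name: str, *args):
    """Executor side: compile the received source and invoke the function."""
    namespace: dict = {}
    code = compile(payload.decode("utf-8"), "<shipped-fn>", "exec")
    exec(code, namespace)  # defines fn_name inside namespace
    return namespace[fn_name](*args)

# Driver serializes, "sends" the bytes, executor compiles and runs.
payload = serialize_fn("""
    def double(x):
        return x * 2
""")
result = compile_and_run(payload, "double", 21)  # → 42
```

Real systems ship something more robust than raw source (a closure with its captured environment, or compiled IR as mentioned above), but the driver/executor split is the same shape.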
I think that's just a matter of how many subdivisions/polygons they decide to devote to those assets. For example, that bucket is pretty inconsequential in the overall scene, so they can save some polygons on it and give more to something like a character or weapon model that is more prominent more of the time, since more polygons mean higher storage and computational requirements.
Something like tessellation [1] can be used to dynamically tune how many subdivisions to give a model based on certain parameters, such as camera distance.
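As a rough illustration of the same idea (actual tessellation runs on the GPU; the `lod_level` helper and its distance thresholds here are made up), a discrete level-of-detail pick driven by camera distance might look like:

```python
def lod_level(distance: float, thresholds=(10.0, 30.0, 80.0)) -> int:
    """Pick a level of detail: 0 = full detail, higher = coarser mesh.

    Each threshold is the camera distance (arbitrary units) beyond which
    the next-coarser mesh is used.
    """
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)  # farthest bucket: coarsest mesh

# A nearby character keeps full detail; a distant prop gets coarsened.
assert lod_level(5.0) == 0
assert lod_level(50.0) == 2
assert lod_level(200.0) == 3
```

Hardware tessellation generalizes this by interpolating the subdivision factor continuously instead of snapping between pre-authored meshes.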
This one surprises me the most. Not the fact that DownThemAll itself isn't WebExtensions-compatible, since that has been known since last year [0], but that there is such a huge gap in download-management functionality.
I'm not even asking for all of the functionality present in DownThemAll; I'd just like:
* to be able to queue up multiple downloads without actually starting them, so I can start/resume them once I go AFK, for example
* to auto-scan all links on the page and filter them before adding them to the download queue, e.g. to queue up all files matching a certain pattern
* rearrange the queue's order
* persist the queue across browser sessions
* pause any given download and be able to resume it across browser sessions
Meanwhile, the usual built-in download functionality appears pitifully bare-bones by comparison: it only lets you manually download individual files one by one, with no semblance of a queue, nor a way to limit concurrent downloads to one (e.g. I'd rather have one file finished and ready to use than five downloading slowly because the connection is spread thin).
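A toy sketch of the queue behavior I mean (the `DownloadQueue` class and the `fetch` stand-in are hypothetical, not any browser API): downloads are queued without starting, can be reordered, and then run strictly one at a time.

```python
from collections import deque

class DownloadQueue:
    """Minimal sketch of a download queue that transfers one file at a
    time. `fetch` is a stand-in for the real transfer."""

    def __init__(self, fetch):
        self.pending = deque()
        self.done = []
        self.fetch = fetch

    def add(self, url):            # queue without starting
        self.pending.append(url)

    def move_to_front(self, url):  # rearrange the queue's order
        self.pending.remove(url)
        self.pending.appendleft(url)

    def run(self):                 # start: strictly one download at a time
        while self.pending:
            url = self.pending.popleft()
            self.done.append(self.fetch(url))

q = DownloadQueue(fetch=lambda url: f"saved:{url}")
q.add("a.iso"); q.add("b.iso"); q.add("c.iso")
q.move_to_front("c.iso")
q.run()
assert q.done == ["saved:c.iso", "saved:a.iso", "saved:b.iso"]
```

Persisting across browser sessions would just mean serializing `pending` to disk, which is exactly the kind of thing the built-in manager doesn't expose.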
For what it's worth, the author of DownThemAll is thankfully working on a WebExtensions version [1], which may also get released on Chrome and other WebExtensions-compatible browsers [2].
I saw this a while back and it seemed amazing, but I was wondering if there's any way to open the actual live documentation page. For example, if you choose the documentation for `Array.prototype.map`, it would pull up that page on MDN rather than the locally cached Dash page.
I know the whole point of Dash is to have locally cached, offline copies of documentation, but I was thinking it would be amazing to use those to feed Helm candidates while actually opening the real, live documentation page.
> You would be crazy to write a UI-heavy app in rust. (note: I'm talking about high level DOM-like manipulation, not about rendering engines)
I think the jury is still out on that one. I agree with the current state of things, but syntax extensions (procedural macros) have tremendous potential to simplify that use case. In general I think procedural macros add a _lot_ of versatility/flexibility to the language. I anticipate a huge boom in that area once they stabilize, and it will catch many people by surprise.
An example of the versatility they enable is the work-in-progress async/await [0], which in most other languages has to be implemented in the language itself. That doesn't preclude eventually building it into the language proper, but while it's a work in progress they can experiment without having to commit to a language feature from the beginning.
I have to agree that things could change in the future. However, my gut tells me Rust GUI libraries/frameworks won't be as great as, say, Cocoa, although I would love to be proven wrong.
I suspect this is mainly for Amazon-bought books with highlights stored on Amazon's servers right? Or does this also work with notes made on side-loaded books?
That is indeed correct. We currently only support importing your highlights synced with Amazon's cloud.
It's totally possible to also let you sync the notes from books on your device, but the UX is a bit worse -- you've got to physically plug in your device. That being said, it's on our todo list!