That's exciting about Firefox, though I really wish Firefox would support OPFS in a way that allows selecting the directory where the files are located. Beyond just persistence, there are some use cases where that is required (vscode.dev, for example).
Unfortunately, the whole point of OPFS is to not do that. Mozilla and Apple thought it would be bad for security to let you read/write arbitrary files and folders on the real file system, even with a permissions dialog, so a fake file system viewable only by an individual website was the compromise solution that everyone agreed to implement.
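For anyone who hasn't played with it, here's a minimal sketch of what OPFS usage looks like (the file name and contents are placeholders, and browser support for createWritable varies):

    // All access starts from the origin-private root; there is no API to
    // point this at a real directory on disk. (Assumes an async context.)
    const root = await navigator.storage.getDirectory();
    const file = await root.getFileHandle("notes.txt", { create: true });
    const writable = await file.createWritable();
    await writable.write("only this origin can ever see this file");
    await writable.close();

Everything under that root is invisible to other origins and to the user's file manager, which is exactly what rules out the vscode.dev-style use case.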
In the docs for Node.remove_child(Node node)[1] it says
"Removes a child node. The node is NOT deleted and must be deleted manually."
Do you still have to queue_free the top node that was removed from the tree? If not, how does it get deleted? Or is there a different method to call for removing it from the tree?
BTW: I really like the `_init` / `_enter_tree` / `_ready` / `_exit_tree` lifecycle as described.
Sounds like Godot 4 is replacing GDNative with GDExtension. Just wondering if you have tried it out and have any thoughts. Is it a smoother experience, about the same, etc.?
I have only briefly looked at it. All in all, it looks very similar to GDNative, albeit a little cleaner. It will be nice to finally be able to expand functionality without recompiling the engine.
Right now I rely on Nim bindings for GDNative, so it might be a while before I make the change, unless I decide to port my code to C++ or update the bindings myself.
For Lesson 1: I think the general pattern that ought to be followed is to "prefer undo to warnings." Undo is often harder to implement, but it's usually a superior experience.
I agree that undo is pretty nice when errors are possible, but I think there's a really good reason to put up warnings too. Let's say someone accidentally made the repo private but didn't notice (unlike what happened here). Although they would be able to undo the change, they might only get around to it once the news had spread (followed by reputation damage, etc.). A well-worded warning would prevent that.
In a way, I think warnings and undo serve two different functions: warnings are meant to inform the user of their action, and undo is meant to roll back actions already taken.
There are definitely still cases where warnings are important, but the point of "prefer undo to warnings" is to eliminate as many warnings as possible to avoid desensitizing users. I should only see a scary pop-up if the action that I'm performing is going to be well and truly destructive.
Undo-instead-of-warning is a great pattern for lower-stakes actions that are easy to undo and cause minimal damage if left in place. This allows the few warnings you do show to be recognized as truly important.
> Even better, flipping "public/private" has absolutely no need to delete any information at all.
Of course it does. You can't watch/star a private repo unless you have access, so all watchers/starrers should be removed. If you kept the data as-is, watchers would still see the repo's updates in their feeds, which is the opposite of what the 'private' feature promises.
Sure, if you don't change any part of the system it has to be that way. But there's no reason a private switch can't just suspend stars/watchers instead of permanently deleting them.
Hi Luca, congratulations! I have a quick question: have there been any proposals to add Subresource Integrity hashes (https://developer.mozilla.org/en-US/docs/Web/Security/Subres...) to the import syntax? I think this affects Deno more acutely than other projects, since Deno supports (encourages?) directly importing from a URL with a precise version number encoded in the URL. It would be nice to add another layer of safety on top and be able to assert that the module received is exactly as expected. Thanks!
You can do that right now, albeit not directly in the import: it's done via an explicit `lock.json` file (https://deno.land/manual@v1.16.4/linking_to_external_code/in...). I'm tempted to agree that having some way to specify it directly in the import, or even just having the lockfile better integrated (right now, you have to ask for it to be used and pass an explicit path), would probably be a good idea.
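Concretely, the workflow from that manual page boils down to something like this (the paths are placeholders):

    # Create/update lock.json from the current dependency tree
    deno cache --lock=lock.json --lock-write src/deps.ts

    # On another checkout: re-download and verify against the lock file
    deno cache --reload --lock=lock.json src/deps.ts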
Ok that makes a lot of sense, the link you shared helped explain things in denoland quite well (and reminds me that I really need to give it another go).
From the link I see this example, in src/deps.ts:

    // Add a new dependency to "src/deps.ts", used somewhere else.
    export { xyz } from "https://unpkg.com/xyz-lib@v0.9.0/lib.ts";
Then essentially a create/update lock-file command is run.
Then the lock file is checked into version control.
Then another developer checks it out and runs a cache reload command.
As you mentioned, in practice it's definitely a bit too manual, but it should be one of those things that can be automated, so it's not the end of the world.
Having said that, I think having it in the import syntax would provide a few benefits (sketched after this list):
1. No extra steps need to be run & hopefully IDEs could auto-complete the hash.
2. It would hopefully be standardized, allowing for native browser support as well (or perhaps lock.json could be standardized, similar to import maps).
3. Having it right there provides an extra level of assurance that the integrity hash is actually going to be used (especially in files intended to run both in the browser and in Deno; not sure how common that is, though).
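To make that concrete, I'm imagining something along these lines (purely hypothetical syntax, loosely riffing on the import-assertions proposal; the hash is a placeholder):

    // Hypothetical syntax, not an actual proposal:
    import { xyz } from "https://unpkg.com/xyz-lib@v0.9.0/lib.ts" assert {
      integrity: "sha384-<base64-digest-of-lib.ts>"
    };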
I am not aware of any specific proposals right now. There was some talk a while back about supporting SRI hashes inside of an import map, but that sorta dissolved. For Deno at least you can use a `lock.json` file with the `--lock` and `--lock-write` flags: https://deno.land/manual/linking_to_external_code/integrity_...
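If I remember right, the shape being floated for import maps was roughly a second top-level key next to "imports" (my reconstruction from memory, so treat the exact field name as a guess):

    {
      "imports": {
        "xyz": "https://unpkg.com/xyz-lib@v0.9.0/lib.ts"
      },
      "integrity": {
        "https://unpkg.com/xyz-lib@v0.9.0/lib.ts": "sha384-<hash>"
      }
    }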
I don't think it's clear cut that it should be out of band 100% of the time. I think there are use cases where inline is useful.
Cycles are definitely an issue; I am not sure there is even a way to work around that, except to pull the cycles apart (which may not always be possible, but is usually not a bad programming practice when it is). At the library level, though, libraries tend not to circularly import each other. If it's being done inside a project, the build tool would be generating the hashes, so dealing with the module graph being invalidated may not be a hassle (or even necessarily a bad thing). In that case the tool could modify the source files, or it could generate a lock file / import map. I agree the latter has the benefit, at that level, of not forcing every source file to be transpiled, but some of that probably has to happen anyway for module reloading (e.g. search parameters appended to the module path for cache invalidation during development, like vite.js does). And realistically, given the nature of the ecosystem, some transpilation is going to happen either way, whether because of .ts or just because of browser differences.
For a top-level deps.ts / deps.js file pattern there probably won't be any cycles. That pattern is to declare a root deps.js file for your project that locks things down and re-exports from third-party libraries, and then to use the exports from that file as the basis for all other imports. For this pattern I think SRI would be extremely helpful and add enough benefit to justify it (even though SRI may not be used in the cases you listed).
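As a minimal sketch of the pattern (reusing the xyz-lib URL from above):

    // src/deps.ts (the single place where external URLs, and ideally
    // their integrity hashes, would live):
    export { xyz } from "https://unpkg.com/xyz-lib@v0.9.0/lib.ts";

    // src/main.ts (the rest of the project imports only the re-export):
    import { xyz } from "./deps.ts";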
Also, for smaller projects or main modules, having the SRI hash inline is really helpful.
It was used as a source of randomness. Someone blindly fixing a "bug" as reported by a linter famously resulted in a major vulnerability in Debian: https://www.debian.org/security/2008/dsa-1571
If they had simply removed the offending line (or, indeed, set a preprocessor flag that was provided explicitly for that purpose) it would have been fine. The problem was that they also removed a similar-looking line that was the path providing actual randomness.
It could be that some portion of the benefit provided by the vaccines, anywhere from a little of it to all of it, wears off after a certain amount of time. That makes a statistic like "99.7% of Waterford have been vaccinated" perhaps less insightful than something like "34% (or whatever the actual number is) have been vaccinated in the last 6 months."
The vaccine wears off in a few days. The vaccine tells the body to make copies of the “spike protein”. After a few days, the body breaks down the vaccine. In the meantime, the body makes copies of the spike protein. The immune system sees the spike protein and starts making antibodies to fight it off.
Those antibodies do not last forever. My understanding is that the lifespan of antibodies is largely determined by the type of virus they target. For example, the antibodies for measles and polio luckily last decades. On the other hand, antibodies for rhinoviruses and coronaviruses last months, not years, which is part of the reason you can have more than one cold in your life.
My understanding (and it could be wrong) is that they can't lower their rates, and might actually have to raise them to compensate for more vacancy, because lower rates would hurt their ability to finance the buildings under whatever math is used to calculate the soundness of the loans. That could be causing the stickiness in price you are seeing (if it's not apocryphal).
My understanding is that your understanding is correct. I've also been told that's why NYC tends to be quite unique: the buildings are mostly owned outright, so there is more leeway in finding something, as you can go directly to the owner. That said, a startup I consult for is in the basement of a building in Vancouver; I talked to their landlord, whose family has owned the building free and clear for three generations. Same deal: no reason to rent it, just let it sit till the market rebounds. Curious to see what will happen.