> In CS theory, regular languages are a strict subset of context-free languages, but regular expression implementations in mainstream programming languages are more powerful. As noulakaz.net/weblog/2007/03/18/… describes, so-called "regular expressions" can check for prime numbers in unary, which is certainly something that a regular expression from CS theory can't accomplish. – Adam Mihalcin
Commented Mar 19, 2012 at 23:50
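The unary prime check mentioned there relies on backreferences and backtracking, which regexes from CS theory don't have. A minimal sketch in Python (the classic pattern: a number n written as n ones is composite iff the string splits into two or more copies of some block of at least two ones):

```python
import re

# Matches composite unary numbers: (11+?) captures a block of >= 2 ones,
# \1+ requires the rest of the string to be one or more repeats of it.
COMPOSITE = re.compile(r"^(11+?)\1+$")

def is_prime_unary(n: int) -> bool:
    # 0 and 1 are neither prime nor matched by the pattern, so guard them.
    return n >= 2 and not COMPOSITE.match("1" * n)
```

The backtracking engine effectively trial-divides: it tries block lengths 2, 3, 4, ... until one divides n evenly, which is exactly why this isn't a "regular" language.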
There are a few open projects that support a lot of these thermal cams, like the more popular Topdon or InfiRay models. PyThermalCamera [0] or Thermal-Camera-Redux [1] are good ones; I got my thermal cam working on my Linux laptop just fine.
Do you know of anything that can handle UNI-T cams, particularly something like UTi721M, which is a nice thermal cam that attaches to a smartphone via USB-C?
So this is great if you're just looking to deduplicate read-only files, less so if you intend to write to them: write to one and they're both updated.
Anyway. Offline/lazy dedup (not in the zfs dedup sense) is something that could be done in userspace, at the file level on any filesystem that supports reflinks. When a tool like rdfind finds a duplicate, instead of replacing with a hardlink, create a copy of the file with `copy_file_range(2)` and let the filesystem create a reflink to it. Now you've got space savings and they're two separate files so if one is written to the other remains the same.
How would this work if I have snapshots? Wouldn't the version of the file I just replaced still be in use there? But maybe I also need to store the copy again if I make another snapshot, because the "original" file isn't part of the snapshot? So now I'm effectively storing more, not less?
AFAIK, yes. Blocks are reference counted, so if the duplicate file is in a snapshot then the blocks would be referenced by the snapshot and hence not be eligible for deallocation. Only once the reference count falls to zero would the block be freed.
This is par for the course with ZFS though. If you delete a non-duplicated file you don't get the space back until any snapshots referencing the file are deleted.
Yes, I know that snapshots incur a cost. But I'm wondering whether the act of deduplicating actually created an extra copy instead of saving one.
copy_file_range already works on zfs, but it doesn't guarantee anything interesting.
Basically all modern dedupe tools use the FIDEDUPERANGE ioctl, which is meant to tell the FS which ranges should be sharing data, and let it take care of the rest.
(BTRFS, bcachefs, etc support this ioctl, and zfs will soon too)
Unlike copy_file_range, it is meant for exactly this use case, and will tell you how many bytes were dedup'd, etc.
However, the fact that editing one copy edits all of them still makes this a non-solution for me at least. I'd also strongly prefer deduping at the block level vs file level.
At least for Android apps, it's best to first rate 5 stars in the in-app pop-up, then leave the lower score in the Play Store, as those pop-ups usually only redirect to the Play Store for high ratings, to inflate their rank.
These are happening this time of the year where I live. I like to go out at sunset to watch them dance. It's amazing how they coordinate so well at such close quarters, looks like a single organism from afar.
If you ask them they will remove sites that you created. It's not under right to forget laws as they don't exist in the US. What I'd like to know is whether they also delete the data or just make it inaccessible.
ploetzblog was available and is now completely gone :( "Lost" some recipes that he didn't migrate, which I used to bake all the time. I used to look them up on the IA and was pissed when it was deleted.
Anyone who has a real need for something like that is using either very specialized (and expensive) hardware, or some peripheral solution like TFA. We already have general-purpose, mass-produced devices that benefit from huge scale, and there's no such scale in amateur radio.
In some cases, the model will be lighter. There is no need for 14M parameters for physics simulations, and there's a lot of promising work in that area.