I can also recommend Trilium Notes [1], which I have been happily using for years. It's currently in "maintenance mode", which I personally see as a feature (no risk of bloatware).
Self-hosted, great webapp, optional native clients and works offline.
If you're using print debugging in Python, try this instead:
> import IPython; IPython.embed()
That'll drop you into an interactive shell in whatever context you place the line (e.g. in a nested loop, inside a `with` block, inside a class, inside a function, etc.).
You can print values, change them, run whatever functions are visible there... and once you're done, the code will keep running with your changes (unless you `sys.exit()` manually).
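As a sketch of where such a line might go (the function and the `debug` flag are my own, purely illustrative; only the `IPython.embed()` call is the actual technique):

```python
def process(items, debug=False):
    """Toy example; the `debug` flag is illustrative, not part of IPython."""
    total = 0
    for i, item in enumerate(items):
        total += item
        if debug and item < 0:
            # Drops into a REPL with `i`, `item`, and `total` in scope;
            # anything you rebind here affects the rest of the run.
            import IPython
            IPython.embed()
    return total
```

Run with `debug=True` to pause at the first negative item and poke around; with `debug=False` it behaves like ordinary code.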
If you are struggling to understand the README, I highly recommend the book Statistical Rethinking: A Bayesian Course with Examples in R and Stan by Richard McElreath. Although the examples are in R, the same concepts apply to Pyro (and NumPyro)
It's not so complicated that you need to read a whole book just to get a rough idea: it's just a cutesy way to specify a "plate model" [1] and then run inference on that model.
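For the curious: a "plate" is just notation for repeated, conditionally independent draws. Here's a plain-NumPy sketch of the generative story such a model encodes (names and numbers are mine; actual Pyro/NumPyro code would use the `pyro.plate`/`numpyro.plate` primitives instead of a plain array draw):

```python
import numpy as np

rng = np.random.default_rng(0)

# Global latent variable (lives outside any plate).
mu = rng.normal(loc=0.0, scale=1.0)

# "Plate" of size N: N conditionally independent observations given mu.
N = 100
y = rng.normal(loc=mu, scale=1.0, size=N)
```

Inference then runs this story backwards: given observed `y`, infer a posterior over `mu`.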
I was going to comment the same. What's wrong with `cat`, whose job is literally to concatenate files? Or even [uncompressed] `tar` archives, which are basically just a list of files with some headers?
I've been using QubesOS for years, and I highly recommend it. Not just for the security (which goes without saying), but also for the cleanliness of not polluting your computer with a myriad of dependencies from projects you only tried once.
And of course for the high-risk activities we all have to do at some point (now, at least, their risk is confined to their virtual machine):
- curl|bash or similar
- pip install, npm install, etc.
- running any random GitHub project
- sudo-installing the drivers for my Brother printer
- installing Zoom
- plugging in random cheap USB devices, e.g. to update their firmware
I think you misunderstand. Qubes relies on software virtualization in conjunction with hardware-assisted virtualization instruction sets. The aforementioned vulnerability existed in Qubes's Xen.
I'm not an expert, but how could it affect VT-d even in principle? AFAIK, VM escape via software exploits is impossible in this case; only side-channel attacks are.
I think the two existing replies misunderstood your question, but it's a good one. Which is to say, I don't know the answer but I feel like I should!
I'll take a guess.
There are large gaps between good RSA keys. 100 might be a valid key, and 138, but nothing in between. Or, well, the in-between values are "valid" but trivially broken, because they have divisors other than 1, themselves, and two huge prime factors (100 and 138 are not good keys for exactly that reason; finding secure keys is left as an exercise). That's why we need RSA keys of more than 256 bits: the key space is sparse, and an attacker can, with some efficiency, skip over the gaps. (This is all from years-old memories of how RSA works; don't take it as certain.)
My guess at the answer to your question is this: it must be inefficient to reconstruct the key from an indexed form (e.g. the first good key (100) has index 1, the second good key (138) has index 2, etc.) without spending computational power disproportionate to the extra resources that storing/transmitting the full key takes.
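A toy illustration of that sparsity-plus-indexing idea at tiny scale (treating "good key" as "product of two distinct primes", which is a big simplification, and with all names mine):

```python
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

lo, hi = 100, 1000
primes = [p for p in range(2, hi) if is_prime(p)]

# "Good keys": toy RSA moduli, i.e. products of two distinct primes.
good = sorted({p * q for p in primes for q in primes if p < q and lo <= p * q < hi})

# Indexed form: transmit k instead of good[k]. Decoding k back into a
# modulus means re-enumerating the sparse key space -- cheap here, but
# the expensive part at real key sizes.
k = good.index(143)  # 143 = 11 * 13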
Now that I read the other answers again, maybe that's what Dylan meant, but to me that answer seems wrong: the public key is argued to be non-uniformly random, and that's precisely what compression algorithms are made to exploit. Perhaps not as efficiently as indexing can, but still. You wouldn't need to apply it to the prime factors or the private key (doing that would, as they say, leak information), just the public part, which people were saying is not fully random.
It's really easy to generate a public RSA key with desired patterns in it. The 1992-era PGP did use the last few bytes of the public key as the identifier, but later versions moved to using a truncated hash (MD5, and later SHA). At some point someone generated colliding keys for all the keys in the public keyring and uploaded them all, which kinda drove the point home.
(I assume that it's harder to generate a public ECDSA key with a specific pattern, but elliptic curve stuff didn't become common until after hashes were used for key identifiers.)
Because of tampering. If an attacker can produce a key pair whose public key's last 40 chars match the last 40 chars of the victim's public key, they effectively have a public key they can dish out via MITM.
How feasible it is to produce such a pair is another story.
It doesn't sound too hard to generate an RSA "vanity key", with any value you want for some of the bytes.
You can't control _all_ of the bytes, because it still needs to have the right structure and for you to have the corresponding private key, but 40 bytes of your choosing seems completely doable.
And if you can do that, you can impersonate someone else whose pubkey has the same 40 bytes. With a hash, any bit difference in any part of the key should result in a completely different fingerprint (hash collisions being extremely hard to find).
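A toy brute-force of the idea, at a scale where it runs instantly (16-bit primes, one target byte; everything here is my own illustration, and real vanity-key generation exploits the modulus's structure rather than brute force):

```python
import random

def random_prime(rng, bits=16):
    """Naive trial-division prime sampler -- fine at toy sizes only."""
    while True:
        p = rng.randrange(1 << (bits - 1), 1 << bits) | 1
        if all(p % d for d in range(3, int(p ** 0.5) + 1, 2)):
            return p

rng = random.Random(42)
target = 0xAB  # the byte we want the toy modulus to end with

# Keep generating toy RSA moduli until the last byte matches.
while True:
    n = random_prime(rng) * random_prime(rng)
    if n % 256 == target:
        break
```

Each additional matching byte multiplies the expected work of this naive search by 256, which is why fixed-prefix/suffix attacks at 40 bytes need structural tricks rather than raw enumeration.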
Note that this is specific to RSA keys. In no scenario (short of extraterrestrial resources, or perhaps nuclear fusion) can you create an ECC key, let alone a hash fingerprint, with 40 bytes of vanity. AFAIK an ECC key is considered to have half the strength of a symmetric key of the same length (pre-quantum), so that's 20 fully random bytes, or 160 bits of entropy. The sun simply doesn't hit the Earth with enough energy, even if you captured 100% of it and starved all life for it, to do a computation of that magnitude. (The boundary is around the standard 128-bit key size IIRC; I keep forgetting whether the sun's energy would suffice for ~120 bits or more like ~140 bits... and that assumes perfectly efficient computational machines.)
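A back-of-envelope version of that energy argument, using Landauer's principle (the thermodynamic minimum energy per bit operation at a given temperature); all constants are approximate and the whole calculation is my own sketch, not the parent's:

```python
import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 300.0                           # room temperature, K
landauer = k_B * T * math.log(2)    # ~2.9e-21 J: minimum energy per bit flip

sunlight_on_earth = 1.7e17          # W, roughly the solar power Earth intercepts
seconds_per_year = 3.15e7

# How many minimal bit operations could one year of Earth's entire
# sunlight budget pay for, at the thermodynamic limit?
ops = sunlight_on_earth * seconds_per_year / landauer
budget_bits = math.log2(ops)        # on the order of 150

# A 160-bit exhaustive search then costs ~2**(160 - budget_bits) such years.
years_for_160 = 2 ** (160 - budget_bits)
```

At room temperature this lands around 150 bits per year of total sunlight, so 160 bits is centuries of Earth's whole energy budget even for ideal hardware; the exact boundary shifts with the assumed operating temperature.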
Since the person you're responding to didn't specify the public key type, and since it's not obvious that your mention of RSA is to the exclusion of other algorithms such as ECC, I felt the comment was a bit misleading.
Others have answered this question pretty well for large public keys, but I wonder about the same argument for shorter elliptic curve keys where the key length and hashed fingerprint length may be the same. For example, is comparing 8 bytes of a Curve25519 public key as good as comparing 8 bytes of a SHA256 hash of that key? My gut says no, since generating partial cryptographic hash collisions is completely random while a structured public key of any kind is presumably less random, but I'm not sure how much less random (and thus easier) it would be.
One possible public key is zeroes + the public fingerprint. If I remember correctly, Diffie-Hellman is based on multiplications, so maybe finding the private key is then equivalent to finding the private key of a 40-char public key, which may be doable.
I'm probably wrong on the details here, but there are probably some math tricks you could use to more easily find private/public key pairs that end with the fingerprint.
Ash Framework is a declarative, resource-oriented application development framework for Elixir. A resource can model anything: a database table, an external API, or even custom code. Ash provides a rich and extensive set of tools for interacting with and building on top of these resources. By modeling your application as a set of resources, other tools know exactly how to use them, allowing extensions like AshGraphql and AshJsonApi to provide top-tier APIs with minimal configuration. With filtering/sorting/pagination/calculations/aggregations, pub/sub, policy authorization, rich introspection, and much more built in, plus a comprehensive suite of tools for building your own extensions, the possibilities are endless.
For those familiar with Phoenix, you can think of Ash as a declarative application modeling layer designed to replace your Phoenix contexts.
[1] https://github.com/zadam/trilium