Hacker News | grantwu's comments

The timeline says that the initial report was 6/16 and the initial patches were 7/8 and 7/18.

It's not clear to me what was exploitable when.


> In exchange for such favorable terms (i.e., small carrying cost, matures on death), the bank will receive a share of the collateral’s appreciation (essentially amounting to “stock appreciation rights"), and this obligation will be settled upon the borrower’s death.

It's a loan in name only.

Regarding Bezos's selling of stocks - perhaps he has offsetting capital gains. See https://old.reddit.com/r/BuyBorrowDieExplained/comments/1f26...


> Blocking within one, with those locks acquired, and thunking back down to userspace, would mean that 1. a CPU core, and 2. the filesystem itself, would both be tied up indefinitely until that callback thunk returns. If it ever does!

In this theoretical design, you could just block all other administrative modifications. I don't think you need to tie up an entire CPU core, and I'm fairly sure that these zfs operations don't block regular reads and writes.

I think you had it right in your initial comment. There's no good way to express branching with an implementation that incrementally submits operations to be committed as a batch. You'd have to take an admin lock on an entire zpool.

EDIT: I talked to a ZFS dev, who said this would take the txg sync lock.


While there are ways to deschedule both userspace and kernel threads, there is no mechanism to deschedule a userspace thread while it's executing in the middle of kernel mode because of a blocking syscall.

Think of it like trying to deschedule a userspace thread in the middle of it having jumped to kernelspace to handle an interrupt. It just wouldn't work; that's not a pre-emptible state, not one that can be cleanly represented during a context switch with a PUSHA, not one where pre-emption would leave the kernel in a known state, etc.

So the CPU core is tied up because the original thread can't be descheduled, and instead would still be "stuck" in the middle of the system call, doing a busy-wait on the result of the callback. To make the callback actually happen in this hypothetical design, the execution of the callback would need to be scheduled onto another CPU core, using some system-global callback-scheduler like Apple's libdispatch.

Note that this is also why, in Linux, processes stuck in the D state are unkillable. They're stuck "inside" a blocking system call, and so cannot be descheduled, even by the process manager trying to hard-kill them (which, in the end, requires the system call to at least return to the kernel so that the kernel resources involved can reach a known postcondition state.)

And this is why innovations like io_uring make so much sense in Linux — they allow a userspace process to 1. make a long-running blocking syscall, while also 2. spawning a worker subprocess to communicate asynchronously with the logic inside the running syscall, by queuing messages back and forth through the kernel rings. (Picture, say, sendfile(2) messaging your worker to let you observe the progress of the operation, and/or to signal it on a channel to gracefully cancel the operation-in-progress.)


I'm not following what you're saying. Why do we need a callback?

In this imaginary design, the syscalls you make would look something like:

- BeginChannelTx -> return ChannelTxID

- ReadZFSProperties(ChannelTxID, params) -> return data

- DestroySomeDatasets(ChannelTxID, params) -> ok

- CommitChannelTx(ChannelTxID)

Notably, DestroySomeDatasets doesn't actually do any work. It merely records which datasets you want to destroy. There are no callbacks as far as I can see: there's no kernel thread waiting on a user thread to do something. This way also lets you express branching.
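
A rough Go sketch of this record-then-commit shape (all types and names here are invented for illustration; the real ZFS interface differs):

```go
package main

import "fmt"

// op records one deferred administrative action; nothing runs until commit.
type op struct {
	kind    string
	dataset string
}

// ChannelTx batches administrative operations under one transaction ID.
type ChannelTx struct {
	id  int
	ops []op
}

var nextID int

// BeginChannelTx would take the pool-wide admin lock in a real design.
func BeginChannelTx() *ChannelTx {
	nextID++
	return &ChannelTx{id: nextID}
}

// DestroySomeDatasets does no work: it only records intent, so the caller
// can read properties, branch, and decide before anything is committed.
func (tx *ChannelTx) DestroySomeDatasets(datasets ...string) {
	for _, d := range datasets {
		tx.ops = append(tx.ops, op{kind: "destroy", dataset: d})
	}
}

// CommitChannelTx applies the recorded batch atomically (here it just
// hands back the batch so we can see what would have been applied).
func (tx *ChannelTx) CommitChannelTx() []op {
	return tx.ops
}

func main() {
	tx := BeginChannelTx()
	tx.DestroySomeDatasets("tank/old-snap-1", "tank/old-snap-2")
	for _, o := range tx.CommitChannelTx() {
		fmt.Printf("%s %s\n", o.kind, o.dataset)
	}
}
```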

The drawback of this approach is that you need a lock on all mutating administrative commands when you call BeginChannelTx. I talked to a ZFS dev, and he said that with ZFS's design, that's actually the txg sync lock. This means that while reads will proceed, writes will only proceed for a short period of time, and nothing will make it to disk. The overhead of making all these syscalls was also judged to be problematic.


I was really really excited when I saw the title because I've been having a lot of difficulties with other Go SQL libraries, but the caveats section gives me pause.

Needing to use arrays for the IN use case (see https://github.com/kyleconroy/sqlc/issues/216) and the bulk insert case feel like large divergences from what "idiomatic SQL" looks like. It means that you have to adjust how you write your queries. And that can be intimidating for new developers.

The conditional insert case also just doesn't look particularly elegant and the SQL query is pretty large.

sqlc also just doesn't look like it could help with the very dynamic queries I need to generate - I work on a team that owns a little domain-specific search engine. The conditional approach could in theory work here, but it's not good for the query planner: https://use-the-index-luke.com/sql/where-clause/obfuscation/...
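
By "dynamic" I mean something like this (a Go sketch; the table and column names are made up): build only the predicates the caller actually set, so the planner sees concrete conditions instead of "($1 IS NULL OR col = $1)" obfuscation.

```go
package main

import (
	"fmt"
	"strings"
)

// buildSearchQuery assembles only the predicates that apply, keeping the
// generated SQL friendly to the query planner. Zero values mean "unset".
func buildSearchQuery(name string, minPrice, maxPrice int) (string, []any) {
	var preds []string
	var args []any
	add := func(cond string, v any) {
		args = append(args, v)
		preds = append(preds, fmt.Sprintf(cond, len(args)))
	}
	if name != "" {
		add("name = $%d", name)
	}
	if minPrice > 0 {
		add("price >= $%d", minPrice)
	}
	if maxPrice > 0 {
		add("price <= $%d", maxPrice)
	}
	q := "SELECT id FROM products"
	if len(preds) > 0 {
		q += " WHERE " + strings.Join(preds, " AND ")
	}
	return q, args
}

func main() {
	q, args := buildSearchQuery("widget", 10, 0)
	fmt.Println(q)    // SELECT id FROM products WHERE name = $1 AND price >= $2
	fmt.Println(args) // [widget 10]
}
```

Since the query text varies per call, there's nothing for a compile-time tool like sqlc to check.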


Arrays are nicer for the IN case because Postgres does not understand an empty list, i.e., “WHERE foo IN ()” will error. “WHERE foo = ANY(array)” works as expected with empty arrays.


Works as expected? Wouldn't that WHERE clause filter out all of the rows? Is that frequently desired behavior?


I could imagine that you're building up the array in go code and want the empty set to be handled as expected.
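
Sketching the difference in Go (helper names invented): rendering an IN list placeholder-by-placeholder produces invalid SQL for an empty slice, while the ANY form is the same valid SQL regardless of length.

```go
package main

import (
	"fmt"
	"strings"
)

// naiveIn renders "IN ($1, $2, ...)"; with n == 0 it produces "IN ()",
// which Postgres rejects as a syntax error.
func naiveIn(n int) string {
	ps := make([]string, n)
	for i := range ps {
		ps[i] = fmt.Sprintf("$%d", i+1)
	}
	return "WHERE foo IN (" + strings.Join(ps, ", ") + ")"
}

// anyClause is always the same valid SQL; an empty array bound to $1
// simply matches no rows, which is usually what the caller wants.
func anyClause() string {
	return "WHERE foo = ANY($1)"
}

func main() {
	fmt.Println(naiveIn(0)) // WHERE foo IN ()  -- invalid SQL
	fmt.Println(naiveIn(2)) // WHERE foo IN ($1, $2)
	fmt.Println(anyClause())
}
```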


Can someone explain why Blastdoor has been unsuccessful? Is it too hard a problem to restrict what iMessage can do?


Can you point to a source that defines Levenstein distance as only referring to bitstreams?

A translation of the original article [1] that introduced the concept notes in a footnote that "the definitions given below are also meaningful if the code is taken to mean an arbitrary set of words (possibly of different lengths) in some alphabet containing r letters (r >= 2)".

And if you wish to strictly stick to how it was originally defined, you'd need to only use strings of the same length.

More recent sources [2] say instead "over some alphabet", and even in the first footnote, describe results for "arbitrarily large alphabets"!

[1] https://nymity.ch/sybilhunting/pdf/Levenshtein1966a.pdf

[2] https://arxiv.org/pdf/1005.4033.pdf
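
For what it's worth, the standard dynamic-programming formulation works over any alphabet unchanged - here's a quick Go sketch over runes, so it handles Unicode too:

```go
package main

import "fmt"

// levenshtein computes edit distance (insert/delete/substitute, unit cost)
// over []rune, so it works for any alphabet, including Unicode.
func levenshtein(a, b string) int {
	ra, rb := []rune(a), []rune(b)
	prev := make([]int, len(rb)+1)
	curr := make([]int, len(rb)+1)
	for j := range prev {
		prev[j] = j // distance from empty prefix is j insertions
	}
	for i := 1; i <= len(ra); i++ {
		curr[0] = i
		for j := 1; j <= len(rb); j++ {
			cost := 1
			if ra[i-1] == rb[j-1] {
				cost = 0
			}
			curr[j] = min(min(prev[j]+1, curr[j-1]+1), prev[j-1]+cost)
		}
		prev, curr = curr, prev
	}
	return prev[len(rb)]
}

func min(x, y int) int {
	if x < y {
		return x
	}
	return y
}

func main() {
	fmt.Println(levenshtein("kitten", "sitting")) // 3
	fmt.Println(levenshtein("héllo", "hello"))    // 1
}
```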


And Unicode is the biggest alphabet haha.


Where? CTRL-F Randen doesn't show anything, and the Randen repo claims it's faster than ChaCha8.


It is not directly in the article, but in a link to a tweet by djb, the creator of ChaCha8. He believes that the cpb (cycles per byte) listed in the Randen comparison is off:

https://twitter.com/hashbreaker/status/1023965175219728386

He suggests that the ChaCha8 implementation used for the benchmark was hand-rolled and unoptimized. And from what I've seen, it's true that many ChaCha8 benchmarks use implementations with none of the tweaks that make it fast.

In this instance, it looks like the Randen author didn’t reimplement it from scratch, but they used an SSE implementation, not an AVX2 one, which would have been faster: https://github.com/google/randen/blob/1365a91bafc04ba491ce79...


DevOps engineer link appears to 404.


Oh I guess we don't need maintenance then


I believe you're putting words in my mouth.

A great dev can even be great within fixing problems/performing ops/maintenance, by my same qualification. _They ship fixes._ Perfection has very little to do with greatness. Aspiring for it might, but that's a second discussion about unrealistic goals and setting oneself up for disappointment.


> I believe you're putting words in my mouth.

You're the one who said you could describe "great" in two words that didn't include maintenance or quality. If it turns out those two words aren't enough, or need lots of asterisks and clarifying words, that's on you.


I don't get the feeling that you and the parent are taking my comment in good faith. Clarification is of course necessary when a response is effectively a non-sequitur/strawman relative to what the other respondents took my meaning to be; but I'd rather not point fingers here, as that's not useful to a productive discussion.

I chose not to use additional clarification because that unnecessarily constrains "shipping" to me. One can deliver value, and have a track record for delivering value, across a very wide set of variables. I've found my heuristic to be far more elegant (if perhaps not precise enough to bear the rough seas of internet discourse) at mapping, post factum, "was this a successful business relationship?" than a much more hair-splitting definition, as well as helping me keep personal biases out of my judgement of someone else's success.


> I don't get the feeling that you and the parent are taking my comment in good faith.

I'm only playing by the rules you yourself set out. People in this thread were discussing how measuring "greatness" is subtle and very difficult, and then you came in asserting you could solve it with a snappy 2.5-word mantra. If you're now claiming that additional clarification is needed, well, yeah, that's what everyone was saying to begin with.


The problem honestly seems like less of a debate about defining greatness and more about defining shipping, at this point.

Maybe this is nitpicking, but I've had this conversation in person more times than I can count during loops, review cycles, and over beers, and I'm hard pressed to think of the last time I got such pushback against something that seemed pretty cut-and-dry; namely "did you get done what you needed to get done without undue pain and suffering."

I'll openly concede that I very well could be in a "communication bubble" where words like "shipping" carry loaded context. I'd still defend my point that if one chose the isomorphic terms within their own space, the "intent" of my message holds water as a useful heuristic, if a rather reductionist one. Put simply: someone who I can trust to fulfill their role without "fires everywhere" gets two thumbs up in my book.

That being said, I'm honestly blown away by the number of downvotes I've been getting for what I typically saw as the pillar of "meritocracy": that you get your job done without burning down the house. I wish more of the opposition would at least take the time to express _why_ as opposed to just burying this. At this point I feel like I'd do better to "save my account" and stop commenting, but alas, this is a topic close to my heart.


I read "pull backups" as "restore from backup" not "make backups".

