There's probably some enterprise level deals going on there (as with every service provider), but they will still be paying them A Lot of Money every year.
Not much, but they'll benefit much more in the short term from reduced taxes by writing those assets down to zero.
Edit: this was downvoted, and I don't understand why. Am I wrong in thinking this action was made in pursuit of a write down? FWIW, this wasn't a thoughtless comment by a random Internet passerby; I hold 41,905 shares of PARA.
Only if you're not encrypting many billions of small messages with the same key, which is a possibility. It's just barely large enough for many uses, and "just barely" makes cryptographers nervous.
No. Extended-nonce constructions solve that problem by using the "large" nonce along with the original key to derive a new key. You then have the "small" nonce space plus the key space worth of random bits.
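The shape of that derivation can be sketched as follows. This is a structural illustration only: `toy_prf` is a made-up placeholder, not a real PRF (an actual extended-nonce cipher like XChaCha20 uses HChaCha20 for this step), and the 16/8-byte split of the 24-byte nonce mirrors XChaCha20's layout.

```cpp
#include <array>
#include <cstdint>
#include <cstring>
#include <functional>
#include <string>

using Key   = std::array<uint8_t, 32>;
using Nonce = std::array<uint8_t, 24>;  // 192-bit "large" nonce

// Stand-in PRF for illustration only -- NOT cryptographically secure.
// A real extended-nonce construction (e.g. XChaCha20) uses HChaCha20 here.
Key toy_prf(const Key& key, const uint8_t* input, size_t len) {
    Key out{};
    std::hash<std::string> h;  // placeholder mixing function
    for (size_t i = 0; i < out.size(); ++i) {
        std::string s(reinterpret_cast<const char*>(input), len);
        s.push_back(static_cast<char>(i));
        s.append(reinterpret_cast<const char*>(key.data()), key.size());
        out[i] = static_cast<uint8_t>(h(s) & 0xff);
    }
    return out;
}

struct DerivedState {
    Key subkey;                          // fresh key for this nonce prefix
    std::array<uint8_t, 8> small_nonce;  // remaining bits feed the cipher
};

// Derive a subkey from the key plus the first 16 bytes of the large
// nonce; the last 8 bytes become the "small" per-message nonce.
DerivedState extend_nonce(const Key& key, const Nonce& nonce) {
    DerivedState st;
    st.subkey = toy_prf(key, nonce.data(), 16);
    std::memcpy(st.small_nonce.data(), nonce.data() + 16, 8);
    return st;
}
```

Because each distinct nonce prefix yields an independent subkey, the effective nonce space is the prefix space times the small-nonce space, which is the point the comment above makes.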
>What happens if you repeat the nonce? You’re going to mess up authenticity for all future messages, and you’re going to mess up privacy for the messages that use the repeated nonce.
The loss of privacy on OCB nonce reuse is not as severe. It would be more or less the same as with ECB mode.
> It is the user’s obligation to ensure that nonces don’t repeat within a session. In settings where this is infeasible, OCB should not be used.
But earlier in that section we have:
> […] The nonce doesn’t have to be random or secret or unpredictable. It does have to be something new with each message you encrypt. A counter value will work for a nonce, and that is what is recommended. […]
So given that GCM uses a counter ("C"), and a counter is recommended for OCB, wouldn't it be simple enough to get the equivalent (?) security more efficiently?
The notion of a nonce here is the same as that in GCM. GCM nonces aren't secret and don't need to be unpredictable; in fact, because the nonce space is so small, a common engineering recommendation is to use a durable counter.
Given that OCB (appears to be?) is more computationally efficient than GCM, is there any reason why OCB shouldn't be favoured nowadays given there are no IP issues?
I like OCB and dislike GCM, but GCM is very, very fast and is the de facto standard AEAD, and the runner-up is Chapoly. OCB would be a quirky choice, and maybe trickier to get in every ecosystem you develop in (I ended up writing my own back in the early days of Golang).
OCB is superior to AES-GCM-SIV in every way other than nonce reuse. OCB is faster than generic GCM for any combination of hardware acceleration. OCB is also significantly better than generic GCM for nonce reuse.
GCM-SIV is not perfect for nonce reuse anyway. It reveals to the attacker that two messages are identical.
My proposal was that parts of incomplete uploads would stick around for only 24 hours after the most recent activity on the upload, and you wouldn't be charged for storage during that time. ahenry@ vetoed that.
No, zinccat, you are wrong. Your model is vastly inferior to GPT4.
I tried multiple prompts. In all cases, your model performed far worse. Here is one:
> My bank says they'll pay 5.5% APY. How much will I have after 6 months if I deposit $100?
< If your bank is offering a 5.5% APY (Annual Percentage Yield), this means that the interest is calculated annually. However, you want to know how much you'll have after 6 months, which is half a year.
...
$100 at a 5.5% APY comes to $102.75 after 6 months.
User: Which is heavier, two pounds of feathers or a pound of bricks?
Gemini: Neither! They both weigh the same, which is two pounds. The key is in the wording of the question. You're comparing equal amounts of two different materials (two pounds of feathers vs. one pound of bricks). As long as the quantity is the same, the weight (force exerted by gravity) will always be the same regardless of the material.
This is a classic trick question that plays on the different densities of feathers and bricks. While bricks are much denser than feathers, meaning they pack more mass into a smaller space, the question specifies the weight by quantity, not volume. So, both two pounds of feathers and one pound of bricks experience the same gravitational pull and therefore weigh the same.
Interesting. Based on this conversation[1], I think Gemini Ultra is massively overfit. Make it do unit conversions or use units it hasn't seen in the same framing before and it does well. But stay close enough to the original trick question (1 and 1) and it fails.
Both GPT-4 and Gemini answered this variation correctly: one pound of potatoes vs. one pound in paper British currency: which of these is heavier?
However, GPT-4 does better with the more ambiguous version, pointing out the ambiguity: one pound of potatoes vs. one pound in paper currency: which of these is heavier?
I saw std::function and std::string (e.g. TotW 117, https://abseil.io/tips/117) being passed by value a lot in newer google3 code. Both are larger than 16 bytes.
This is done so you can use std::move to take ownership of the allocated memory in these objects rather than do a new allocation. Passing by value rather than by rvalue reference lets your function be more flexible at the call site. You can pass an rvalue/move or just make a copy at the call site, which means the caller (who actually knows whether a copy or a move is more appropriate) gets to control how the memory gets allocated.
An unnecessary memory allocation is much more of a performance hit than suboptimal calling convention.
The vanilla by-value interface often does the right thing depending on the context (e.g., if an unnamed return value is passed in, it can be move-constructed; otherwise it can be copied). I've seen some codebases provide separate overloads, one taking a const reference and one taking the type by value.
Wouldn't this be very annoying to work with, because now you have to explicitly move or copy the string whenever you want to construct one of these objects?
It's kinda what Rust forces you to do, except that std::move is implied. Anything taken by value is equivalent to taking by && unless the type is explicitly marked as Copy (i.e. it can be trivially copied and the copies are implicit).
But yeah, in a C++ codebase, good modern practices are often verbose and clunky.
Passing std::function by value is almost definitely wrong these days with absl::AnyInvocable (if you need to store the type) and absl::FunctionRef (if you don't). Rough analogues in the standard are std::move_only_function (C++23) and std::function_ref (C++26).
std::string in the case you cited is only really relevant if you std::move() into it and you would otherwise incur a string copy. Yes, it's bigger than 16 bytes (24 bytes), but that pales in comparison to the alternative.
(Taking std::string&& would eliminate the possibility of misuse / accidental copies, but that pattern is generally discouraged by Google's style guide for various reasons.)
Also, just because you see a certain pattern an awful lot even at Google doesn't mean that it's best practice -- there are plenty of instances of protobufs being passed by value...
Please prove me wrong on these points. My current belief and understanding is that:
1. `foo(T)` is polymorphic over both `T::T(const T&)` and `T::T(T&&)`. This gives you the benefits of both pass-by-move (using `std::move` as needed) and copy.
2. Usage of `foo(T&&)` signals a code smell or an anti-pattern, as `foo(T)` should be used instead, unless it is perfect forwarding / a universal reference (`template <typename T> foo(T&&)`).
Google builds non-PIE, non-PIC, static, profile-guided, link-time-optimized, and post-link-optimized binaries and probably DGAF about calling conventions.
< 很抱歉,我还未学习到如何回答这个问题的内容,暂时无法提供相关信息
(Google Translate: "I'm sorry, I haven't learned how to answer this question yet and cannot provide relevant information for the time being.")