
> Is Xi Winnie-the-Pooh?

< 很抱歉,我还未学习到如何回答这个问题的内容,暂时无法提供相关信息

(Google Translate: "I'm sorry, I haven't learned how to answer this question yet and cannot provide relevant information for the time being.")


No, those people are almost certainly paying far below list price.


There are probably some enterprise-level deals going on there (as with every service provider), but they will still be paying them A Lot of Money every year.


Not much, but they'll benefit much more in the short term in reduced taxes by writing down those assets to zero.

Edit: this was downvoted, and I don't understand why. Am I wrong in thinking this action was made in pursuit of a write down? FWIW, this wasn't a thoughtless comment by a random Internet passerby; I hold 41,905 shares of PARA.


It's worth mentioning AES-GCM-SIV[1], which is the fix for this issue.

[1] https://www.rfc-editor.org/rfc/rfc8452.html


The alternative, which I prefer, is an XGCM-like construction that just gives you a large enough nonce to comfortably use random nonces.


+1, soatok has a write-up of how that works: https://soatok.blog/2022/12/21/extending-the-aes-gcm-nonce-w...

...a variant on that is DNDK-GCM in draft at https://datatracker.ietf.org/doc/draft-gueron-cfrg-dndkgcm/ and a recent presentation: https://youtu.be/GsFO4ZQlYS8 (this is Shay Gueron who worked on AES-GCM-SIV too).


AES-GCM has a 12 byte nonce if I recall correctly. Is 96 bits of entropy insufficient to guarantee uniqueness every time it’s generated?


It's sufficient only if you're not encrypting many billions of small messages under the same key, which is a real possibility. 96 bits is just barely large enough for many uses, and "just barely" makes cryptographers nervous.
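
Roughly, with random 96-bit nonces the collision probability after n messages under one key is about n^2 / 2^97 (birthday bound). A back-of-the-envelope check in C++ (the message counts below are just illustrative):

    // Birthday-bound estimate for random 96-bit AES-GCM nonces:
    // P(collision) ~= n^2 / (2 * 2^96) for n messages under one key.
    #include <cmath>
    #include <cstdio>

    int main() {
      const long double nonce_space = std::pow(2.0L, 96.0L);
      const long double counts[] = {1.0e6L, 1.0e9L, 4.0e9L, 1.0e12L};
      for (long double n : counts) {
        long double p = (n * n) / (2.0L * nonce_space);
        std::printf("%.0Le messages -> collision probability ~ %.1Le\n", n, p);
      }
      return 0;
    }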


No. Extended-nonce constructions solve that problem by using the "large" nonce along with the original key to derive a new key. You then have the "small" nonce space plus the key space worth of random bits.
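
For concreteness, here's a rough sketch of that derive-a-subkey idea in C++ using OpenSSL's HKDF. The 24-byte nonce split (12 bytes into the KDF, 12 bytes left over as the AES-GCM nonce), the KDF choice, and the function names are illustrative assumptions, not the exact layout of DNDK-GCM or the XAES-style constructions:

    // Sketch: derive a fresh AES-256 subkey from the long-term key plus the
    // first half of a 192-bit "extended" nonce; the second half is then used
    // as an ordinary 96-bit nonce for AES-GCM under that subkey.
    #include <openssl/evp.h>
    #include <openssl/kdf.h>

    #include <algorithm>
    #include <array>
    #include <cstdint>

    struct DerivedGcmParams {
      std::array<uint8_t, 32> subkey;     // use with EVP_aes_256_gcm()
      std::array<uint8_t, 12> gcm_nonce;  // per-message 96-bit nonce
    };

    bool DeriveSubkey(const std::array<uint8_t, 32>& key,
                      const std::array<uint8_t, 24>& big_nonce,
                      DerivedGcmParams* out) {
      // The low 12 bytes pass straight through as the GCM nonce.
      std::copy(big_nonce.begin() + 12, big_nonce.end(), out->gcm_nonce.begin());
      // The high 12 bytes are mixed into the key via HKDF-SHA256.
      EVP_PKEY_CTX* ctx = EVP_PKEY_CTX_new_id(EVP_PKEY_HKDF, nullptr);
      size_t len = out->subkey.size();
      bool ok = ctx != nullptr &&
                EVP_PKEY_derive_init(ctx) > 0 &&
                EVP_PKEY_CTX_set_hkdf_md(ctx, EVP_sha256()) > 0 &&
                EVP_PKEY_CTX_set1_hkdf_key(ctx, key.data(),
                                           static_cast<int>(key.size())) > 0 &&
                EVP_PKEY_CTX_add1_hkdf_info(ctx, big_nonce.data(), 12) > 0 &&
                EVP_PKEY_derive(ctx, out->subkey.data(), &len) > 0;
      EVP_PKEY_CTX_free(ctx);
      return ok && len == out->subkey.size();
    }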


Could this be extended to give us XOCB? I am not sure it would make much sense with the OCB size recommendations.


The "fix" is to use a nonce misuse resistant cipher, of which AES-GCM-SIV is one.

But, AES-GCM-SIV requires two passes over the data, which isn't always ideal.

The goal of the CAESAR competition [1] was essentially to find alternatives. Whether that goal has been met is a bit unclear at the moment.

[1] https://competitions.cr.yp.to/caesar-submissions.html


> The goal of the CAESAR competition [1]

https://en.wikipedia.org/wiki/CAESAR_Competition


At this point OCB has an expired patent, and only needs one pass over the data:

* https://en.wikipedia.org/wiki/OCB_mode


From the OCB FAQ[1]:

>What happens if you repeat the nonce? You’re going to mess up authenticity for all future messages, and you’re going to mess up privacy for the messages that use the repeated nonce.

The loss of privacy on OCB nonce reuse is not as severe. It would be more or less the same as with ECB mode.

[1] https://www.cs.ucdavis.edu/~rogaway/ocb/ocb-faq.htm


The next few lines are:

> It is the user’s obligation to ensure that nonces don’t repeat within a session. In settings where this is infeasible, OCB should not be used.

But earlier in that section we have:

> […] The nonce doesn’t have to be random or secret or unpredictable. It does have to be something new with each message you encrypt. A counter value will work for a nonce, and that is what is recommended. […]

* https://www.cs.ucdavis.edu/~rogaway/ocb/ocb-faq.htm#nonce

So given that GCM uses a counter ("C"), and a counter is recommended for OCB, wouldn't it be simple enough to get the equivalent (?) security more efficiently?


The notion of a nonce here is the same as that in GCM. GCM nonces aren't secret and don't need to be unpredictable; in fact, because the nonce space is so small, a common engineering recommendation is to use a durable counter.
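
As a hypothetical illustration of the durable-counter approach, a 96-bit GCM nonce can be packed from a fixed sender id plus a persisted 64-bit counter (the field sizes here are assumptions, not a standard layout):

    #include <array>
    #include <cstdint>

    // Hypothetical helper: build a 96-bit GCM nonce from a fixed 32-bit
    // sender/key id and a 64-bit counter that is persisted before use.
    std::array<uint8_t, 12> MakeGcmNonce(uint32_t sender_id, uint64_t counter) {
      std::array<uint8_t, 12> nonce{};
      for (int i = 0; i < 4; ++i)
        nonce[i] = static_cast<uint8_t>(sender_id >> (8 * (3 - i)));
      for (int i = 0; i < 8; ++i)
        nonce[4 + i] = static_cast<uint8_t>(counter >> (8 * (7 - i)));
      return nonce;  // never reuse a counter value under the same key
    }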


Given that OCB (appears to be?) is more computationally efficient than GCM, is there any reason why OCB shouldn't be favoured nowadays given there are no IP issues?


I like OCB and dislike GCM, but GCM is very, very fast and is the de facto standard AEAD, and the runner-up is Chapoly. OCB would be a quirky choice, and maybe trickier to get in every ecosystem you develop in (I ended up writing my own back in the early days of Golang).


OCB is superior to AES-GCM-SIV in every way other than nonce reuse. OCB is faster than generic GCM for any combination of hardware acceleration. OCB is also significantly better than generic GCM for nonce reuse.

GCM-SIV is not perfect for nonce reuse anyway. It reveals to the attacker that two messages are identical.


Yes, that sucks. Blame ahenry@, then GM for S3.

My proposal was that parts of incomplete uploads would stick around for only 24 hours after the most recent activity on the upload, and you wouldn't be charged for storage during that time. ahenry@ vetoed that.


Why would you propose something that makes the company earn less money? I'm sure that at Amazon's scale, this misfeature earned millions of dollars.


Customer relationships. I recall a Bezos quote along the lines of "It's better to lose a refund than to lose a customer".


My thoughts and prayers for the Nickelback fans outed by this breach.


No, zinccat, you are wrong. Your model is vastly inferior to GPT4.

I tried multiple prompts. In all cases, your model performed far worse. Here is one:

> My bank says they'll pay 5.5% APY. How much will I have after 6 months if I deposit $100?

< If your bank is offering a 5.5% APY (Annual Percentage Yield), this means that the interest is calculated annually. However, you want to know how much you'll have after 6 months, which is half a year. ... $100 at a 5.5% APY gives you $102.75 after 6 months.
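
For reference, a quick check of that arithmetic (assuming 5.5% is the APY, i.e. compounding is already baked into the annual yield):

    #include <cmath>
    #include <cstdio>

    // $100 at 5.5% APY for 6 months: APY already includes compounding,
    // so half a year is (1 + 0.055)^0.5; the naive answer uses 0.055 / 2.
    int main() {
      double compounded = 100.0 * std::pow(1.055, 0.5);
      double simple = 100.0 * (1.0 + 0.055 / 2.0);
      std::printf("compounded: $%.2f, simple: $%.2f\n", compounded, simple);
      // prints roughly: compounded: $102.71, simple: $102.75
      return 0;
    }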


(2010)


User: Which is heavier, two pounds of feathers or a pound of bricks?

Gemini: Neither! They both weigh the same, which is two pounds. The key is in the wording of the question. You're comparing equal amounts of two different materials (two pounds of feathers vs. one pound of bricks). As long as the quantity is the same, the weight (force exerted by gravity) will always be the same regardless of the material.

This is a classic trick question that plays on the different densities of feathers and bricks. While bricks are much denser than feathers, meaning they pack more mass into a smaller space, the question specifies the weight by quantity, not volume. So, both two pounds of feathers and one pound of bricks experience the same gravitational pull and therefore weigh the same.


Interesting. Based on this conversation[1], I think Gemini Ultra is massively overfit. Make it do unit conversions or use units it hasn't seen in the same framing before and it does well. But stay close enough to the original trick question (1 and 1) and it fails.

[1] https://g.co/gemini/share/94bfb8f9ebea


Both GPT-4 and Gemini answered this variation correctly: one pound of potatoes vs. one pound in paper British currency: which of these is heavier?

However, GPT-4 does better with the more ambiguous version, pointing out the ambiguity: one pound of potatoes vs. one pound in paper currency: which of these is heavier?


This is surely in the training data.


I saw std::function and std::string (e.g. TotW 117, https://abseil.io/tips/117) being passed by value a lot in newer google3 code. Both are larger than 16 bytes.


This is done so you can use std::move to take ownership of the allocated memory in these objects rather than doing a new allocation. Passing by value rather than by rvalue reference lets your function be more flexible at the call site: the caller can pass an rvalue (or std::move) or just make a copy, which means the caller (who actually knows whether a copy or a move is more appropriate) gets to control how the memory gets allocated.

An unnecessary memory allocation is much more of a performance hit than a suboptimal calling convention.
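
A minimal sketch of the pattern with a made-up Widget class, showing how the caller picks copy vs. move:

    #include <string>
    #include <utility>

    // Hypothetical sink type: takes std::string by value and moves it into place.
    class Widget {
     public:
      explicit Widget(std::string name) : name_(std::move(name)) {}
     private:
      std::string name_;
    };

    int main() {
      std::string n = "persistent";
      Widget a(n);             // caller keeps n: copy into the parameter, then a cheap move
      Widget b(std::move(n));  // caller is done with n: move in, no allocation
      Widget c("temporary");   // rvalue: constructed directly into the parameter
      return 0;
    }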


In that case, the optimal interface should take std::string&& no? But it's awkward.


The vanilla interface often allows what's right depending on the context (e.g., if an unnamed return value is passed in, it can be move-constructed; otherwise it is copied). I've seen some codebases provide overloads for a const reference and a plain value type.


Wouldn't this be very annoying to work with, because now you have to explicitly move or copy the string whenever you want to construct one of these objects?


It's kinda what Rust forces you to do, except that std::move is implied. Anything taken by value is equivalent to taking by && unless the type is explicitly marked as Copy (i.e. it can be trivially copied and the copies are implicit).

But yeah, in a c++ codebase, good modern practices are often verbose and clunky.


The function could accept a universal reference instead of an rvalue reference; this avoids the dance the caller has to do to pass a copy.

IMO it's hard to beat pass by value considering both performance and cognitive load.


Yeah, making it accept a universal reference would fix it...

...but that requires the argument to be a type from a template D: so you'd have to write:

    template<typename String = std::string>
    void dosomething(String &&str);
...and that's not quite right either, since you'd want it to be either an rvalue reference or a const lvalue reference
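
One hedged way around the template is to just write the two overloads you actually want (the names here are illustrative):

    #include <string>
    #include <utility>

    // Lvalue callers keep their string; copy only if you need to retain it.
    void dosomething(const std::string& str) {
      std::string copy = str;
      // ... use copy ...
    }

    // Rvalue callers hand over ownership; steal the buffer with a move.
    void dosomething(std::string&& str) {
      std::string own = std::move(str);
      // ... use own ...
    }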


Explicitly having to copy or move is a desirable coding style, IMO.


Passing std::function by value is almost definitely wrong these days with absl::AnyInvocable (if you need to store the type) and absl::FunctionRef (if you don't). Rough analogues in the standard are std::move_only_function (C++23) and std::function_ref (C++26).
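
A small sketch of that split, assuming Abseil is available (the names ApplyOnce and Deferred are made up for illustration):

    #include <utility>

    #include "absl/functional/any_invocable.h"
    #include "absl/functional/function_ref.h"

    // Non-owning view: fine when the callable only has to outlive the call.
    int ApplyOnce(absl::FunctionRef<int(int)> f, int x) { return f(x); }

    // Owning, move-only wrapper: use when the callable has to be stored.
    class Deferred {
     public:
      explicit Deferred(absl::AnyInvocable<void()> cb) : cb_(std::move(cb)) {}
      void Run() { cb_(); }
     private:
      absl::AnyInvocable<void()> cb_;
    };

    int main() {
      int factor = 2;
      int r = ApplyOnce([&](int x) { return x * factor; }, 21);  // r == 42
      Deferred d([r] { /* ...use r later... */ });
      d.Run();
      return 0;
    }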

std::string in the case you cited is only really relevant if you std::move() into it and you would otherwise incur a string copy. Yes, it's bigger than 16 bytes (24 bytes), but that pales in comparison to the alternative.

(Taking std::string&& would eliminate the possibility of misuse / accidental copies, but that pattern is generally discouraged by Google's style guide for various reasons.)

Also, just because you see a certain pattern an awful lot even at Google doesn't mean that it's best practice -- there are plenty of instances of protobufs being passed by value...


Please prove me wrong on these points. My current belief and understanding is that:

1. `foo(T)` is polymorphic over both `T::T(const T&)` and `T::T(T&&)`. This gives you the benefits of both pass-by-move (using `std::move` as needed) and pass-by-copy.

2. Use of `foo(T&&)` signals a code smell or an anti-pattern; `foo(T)` should be used instead, unless it is a perfect-forwarding / universal reference, i.e. `template <typename T> foo(T&&)` (see the sketch below).
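
A sketch of point 2's exception, where `T&&` on a deduced template parameter is a forwarding (universal) reference:

    #include <string>
    #include <utility>

    // Illustrative holder type: set() binds to lvalues and rvalues alike.
    struct Holder {
      std::string value;
      template <typename S>
      void set(S&& s) { value = std::forward<S>(s); }  // copies lvalues, moves rvalues
    };

    int main() {
      Holder h;
      std::string name = "lvalue";
      h.set(name);             // S = std::string&  -> copy-assign
      h.set(std::move(name));  // S = std::string   -> move-assign
      return 0;
    }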


Google builds non-PIE, non-PIC, static, profile-guided, link-time-optimized, and post-link-optimized binaries and probably DGAF about calling conventions.


I have seen the assembly output of Google code, and I will say that my previous comment still stands.


Non-trivially-copyable types are never passed in registers, regardless of size.

