Hacker News | shiftingleft's comments

Do they help deter people from becoming smokers in the first place?

I doubt that this is a problem in need of a technical solution. In any case, this system can easily be circumvented by emulating the key presses on that website.


Looking at a few files, there are definitely some generated comments in there. Do you have any method to quantify how much of it is (likely) generated?


> Shouldn't be too hard to do even with pen and paper since the 2-adic eval of 52! is large.

Could you elaborate?


https://en.wikipedia.org/wiki/P-adic_valuation

It's nothing fancy: take the prime power decomposition of your number and pick the exponent of p.

There's a clever way to do that for a factorial, but I have the Pari/GP app on my phone so I just did:

    valuation(52!,2)
which gives the answer 49, so 52! is divisible by 2 forty-nine times. Interestingly chatgpt4 turbo got it right with no extra prodding needed.


One can do this mentally easily enough. 52! = 52·51·50·…·3·2·1, and there are 26 even numbers in this product, so we have 26 factors of 2. Taking them out, 13 of those even factors are divisible by 4, but we already took the first 2 from each, so we get 13 more 2's, giving 26 + 13 = 39. Now on to factors divisible by 8: they are half of those divisible by 4, so half of 13, giving 6 more 2's (52 is not divisible by 8, so we round down). So far we have 39 + 6 = 45 two's in the factorization of 52!. On to numbers up to 52 divisible by 16: that's half of those divisible by 8, so 3 more, getting us to 48. Finally there is the factor 32 = 2^5 of 52!, giving us one more 2, hence 49.

In general, for p a prime, the largest k such that p^k divides n! is given by k = Floor(n/p) + Floor(n/p^2) + … + Floor(n/p^t), where p^(t+1) > n.
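That formula is easy to check with a short script (Python here, as a stand-in for Pari/GP; the function name is mine):

```python
from math import factorial

def factorial_valuation(n, p):
    """Legendre's formula: largest k such that p**k divides n!."""
    k, q = 0, p
    while q <= n:
        k += n // q  # count multiples of p, p^2, p^3, ... up to n
        q *= p
    return k

# 26 + 13 + 6 + 3 + 1 = 49, matching the mental computation above
print(factorial_valuation(52, 2))  # 49
```

It works for any prime the same way; for example, factorial_valuation(52, 3) gives 23, since 17 + 5 + 1 = 23.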


Does not seem right, the number is way too low. After all, just the last factor (52) can be divided by 2 at least 5 times.

My calculator says 225 bits, and the text suggests the same. Looks like chatgpt4 was wrong as usual :)


The 2-adic valuation is about how often 2 is a prime factor of a number.

For 52, for example, 2 is a prime factor twice, because (52/2)/2 = 13, which is no longer divisible by 2.

Or in other words 52! / (2^49) is an integer, but 52! / (2^50) is not, thus 49 is the correct answer.
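That statement can be checked directly by repeated division (a quick Python sketch):

```python
from math import factorial

# Divide out factors of 2 from 52! until the result is odd
n = factorial(52)
count = 0
while n % 2 == 0:
    n //= 2
    count += 1
print(count)  # 49
```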


When you see something that doesn't look right it's good to engage and work things out, but it's also courteous to check that you haven't misunderstood. I see how you could arrive at the understanding you had: that "how many times you can divide by 2" is equivalent to the base-2 logarithm. It's not the right interpretation, however, and in context that's clear.

Could I recommend phrasing this kind of comment as a question in the future? (Notwithstanding the life hack that making a false statement on the internet is the shortest path to an answer.)


Fair, I should have rephrased the comment to more directly reference the thread-starter, which is encoding bits "using lexicographic order of the permutation, doing a binary search each time." It's not that your computation of the 2-adic decomposition is wrong; it's that the number of bits the 2-adic decomposition suggests is too low.

Let me elaborate:

I am not 100% sure what user qsort meant by "binary search", but one of the simplest manual algorithms I can think of is to use input bits as decision points in a binary-search-like split of the input state: you start with 52 cards; depending on the first input bit you take the top or bottom half of the set, then use the 2nd input bit to select the top or bottom of that subset, and so on, repeating until you get down to a single card. Then place it in the output, remove it from the input stack, and repeat the procedure. Note there is no math at all, and this would be pretty trivial to do with just pen & paper.

What would be the resulting # of bits encoded this way? With 52 cards, you'd need to consume 5 to 6 bits, depending on the input data. Once you are down to 32 cards, you'd need exactly 5 bits; 31 cards will need 4-5 bits depending on the data, and so on... If I've calculated this correctly, that's at least 203 bits even in the worst case, way more than the 51 bits mentioned above.
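That split-and-select procedure can be sketched in a few lines of Python (function and variable names are mine, and this is just one possible reading of the scheme):

```python
def encode_bits_into_deck(bits, n_cards=52):
    """Sketch of the split-and-select encoder described above.

    Each output card is chosen by a binary search over the remaining
    cards: every input bit selects the top or bottom half of the
    candidates. If the bits run out, we pad with zeros.
    """
    remaining = list(range(n_cards))
    bit_iter = iter(bits)
    output = []
    while remaining:
        lo, hi = 0, len(remaining)
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if next(bit_iter, 0):
                lo = mid  # bit 1: take the bottom half
            else:
                hi = mid  # bit 0: take the top half
        output.append(remaining.pop(lo))
    return output
```

Feeding it all-zero bits yields the identity order and all-one bits the reversed deck; real input data lands somewhere in between, consuming 5-6 bits per card early on.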

(Unless there is some other meaning to "51" I am missing? But all I see in the thread are conversations about bit efficiency...)


To be clear I agree with your interpretation about how much data you can store in the deck permutation and how to search it, my previous comment was only about p-adic valuations. I can't actually see how the 49 is relevant either.


And 50 of the factors of 52! are greater than 2.


The basic method would be to assign a number, 0 through 52!-1, to each permutation in lexicographic order. Because 52! is not a power of 2, if you want to encode binary bits, you can only use 2^N permutations, where 2^N is the largest power of 2 less than 52!. You cannot losslessly encode more than N bits; that's a hard bound, they just won't fit.
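For a 52-card deck that hard bound N works out to 225, matching the 225 bits mentioned upthread (a quick check in Python):

```python
from math import factorial

perms = factorial(52)            # number of distinct deck orderings
n_bits = perms.bit_length() - 1  # largest N with 2**N <= 52!
print(n_bits)  # 225
```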

If you wanted to turn this into an actual protocol, you would presumably flag some permutations as invalid and use the other ones. You would then encode one bit at a time doing a binary search of the set of valid permutations.

Because 52! has a large number of 2s in its factorization, for a careful choice of the valid permutations it should be practical (or at least not significantly more impractical than the OP's proposed method) to perform this by hand because you would be able to "eyeball" most splits of the binary search.


Their talk was quite nice: they talk about their experiences with other HSMs, their history, what led them to design their own, the many aspects of their design, and then go through potential attacks:

https://youtu.be/zD5EdvGs98U?t=13m23s


> The article mentions the Cochrane Review which rigorously concluded the opposite.

Do you mean this one?

"Many commentators have claimed that a recently-updated Cochrane Review shows that 'masks don't work', which is an inaccurate and misleading interpretation.

It would be accurate to say that the review examined whether interventions to promote mask wearing help to slow the spread of respiratory viruses, and that the results were inconclusive. Given the limitations in the primary evidence, the review is not able to address the question of whether mask-wearing itself reduces people's risk of contracting or spreading respiratory viruses."

https://www.cochrane.org/news/statement-physical-interventio...


Yes, that one. From the "We need scientific dissidents" article this thread is about:

When Tom Jefferson and his group published a report saying “We are uncertain whether wearing masks or N95/P2 respirators helps to slow the spread of respiratory viruses based on the studies we assessed,” the editor in chief of Cochrane apologized for the wording, even though subsequent surveys showed the language was standard for Cochrane given the nature of the evidence.

The incoherent attempt at walking back the study findings by Cochrane administration is the type of problem the article is discussing. It came after a pressure campaign by a social media influencer [1] and the New York Times [2], not due to any actual problem with the review (which AFAIK remains unaltered).

The actual study authors stand by their conclusions. But consider something else: the statement on their website is nonsensical, asserting that it's wrong to accept the null hypothesis in this case despite a large multi-study failure to find significant results. But that's not how science works. You start by assuming the null (community masking/mandates don't work), and then try to disprove it. If you can't then you stick with the initial belief that there's nothing there, you don't assert that anything failing to find what you want is "inconclusive" - that's starting from a conclusion and working backwards.

[1] https://twitter.com/thackerpd/status/1644306405942255617?s=2...

[2] https://dailysceptic.org/2023/04/13/the-new-york-times-is-su...


The thing is, people elide the correct conclusion of that study to "masks don't work", which is not what the study says, and which is actually a hypothesis that has been roundly disproven: there are numerous studies showing the efficacy of mask wearing for preventing the spread of infectious diseases. They apologized for the wording for a good reason, which is that people took it out of context to suggest something that not only isn't what the study said but contradicts a variety of other research.


This is a hard one... the parent commenter mentioned that there should be some indication about when people were told to wear masks in the charts that show the spread of the virus in at least some countries. That's difficult to see anywhere... the study you link to says that they were simply unable to show whether masks are effective because of "the high risk of bias in the trials, variation in outcome measurement, and relatively low adherence with the interventions during the studies".

Let me give a little anecdote about that... Brazil was one of the worst affected countries, despite having made it mandatory to wear masks. Sweden, on the other hand, only made it mandatory to wear masks in a few very limited situations (e.g. public transport), and even then, only after the pandemic was already dying down, much later than most countries. And Sweden seems to have had a below OECD average rate of deaths due to the pandemic.

I know it's a difficult comparison to make: Sweden's healthcare system is likely more "competent" than Brazil's (because it can afford much more, though both have free or nearly free healthcare available to everyone), and people in Sweden tended to be less skeptical of the virus (personal experience, not sure this can be shown by data). That makes a big difference: people in Brazil would often wear a mask just because they were forced to, and hence wore it incorrectly and didn't really try hard to make it effective, while people in Sweden did it of their own accord (for the longest time, Sweden only recommended masks but did not make them mandatory) and were much more likely to have done their research about how to make better use of the mask to avoid getting infected.

Also, it has been shown that most deaths in Sweden occurred early on, among the elderly living in nursing homes, where employees were the main source of infections. (The employees are almost always foreigners with a very different culture and hence, I suggest, less likely to properly wear masks and follow government recommendations to contain the spread of the virus, like completely avoiding meeting people who are not living in the same household.) So if you take that into account, the fact that people in Sweden were mostly not wearing masks at all for most of the pandemic should show that, at the very least, wearing masks was not the most effective way to keep the virus under control.

My takeaway is that masks may help, but only if you actually believe they will help and take sufficient care to wear a proper mask and wear it properly... and that other measures, like voluntary social distancing, turned out to be more effective than just wearing masks.


> foreigners with a very different culture and hence, I suggest, less likely to properly wear masks

What does culture or nation of origin have to do with being able to wear a mask properly?


The answer is right there in the quote you decided to cut short for some reason.

Culturally, Swedes trust their government a lot more than people do in other countries.



This is a new version (v2) of their paper.


Can you tl;dr the differences and corrections? arXiv unfortunately has nothing to show for the errata.


There's some quite complex cryptographic machinery called Direct Anonymous Attestation that would make this possible. I don't know if they plan on using this though.


> Chia replaces Nakamoto’s energy hungry proof-of-work consensus with an eco-friendly proof-of-space.

Don't be fooled, proof-of-space will eat up SSDs like no tomorrow and you're still hoarding HDDs for little gain.

Proofs of space prove that you're storing some useless data. Typically you use HDDs to store the useless data, but to prepare such an HDD you need an SSD. The setup process will perform so many writes that the SSD will be garbage very soon.


"Proof of" is a euphemism for "waste of resources" in general.


Only in cryptocurrency, i.e. in the cargo cult of finance.

In real world finance a common stock is proof of making a useful investment at some point in the past, when the stock certificate was issued in return for capital. Same with other tradeable and non-tradeable assets.


But that financial stock certificate isn't proof. It's actually fiat: the clearinghouse or the national registry says who owns what, and a court can change it.

The crypto version of proof is closer to a bearer bond or a bar of gold: whoever has possession has possession.


Yes, you pointed out why the analogy isn't perfect, all the while missing the point entirely.

In cargo cults it would be like complaining that the control tower made of wood has the wrong number of windows, all the while missing the fact that you don't have any aircraft.


No, the point I'm making is that there are conflicting definitions of proof here. The same word means different things.


If two crypto bros go to court, the one who refuses to follow the judge's ruling gets to stay in jail. How does crypto help with that? Possession is 9/10ths, but the state is 100%.


Why can't they use Folding@Home as proof of something?


For something to be used in PoW, it needs to be both hard to solve and easy to verify. Protein folding checks the first box but not the second. As an example, prime number calculations check both boxes, and there's Primecoin, which bases its proof algorithm on prime chains.

This only applies if you prefer to be trustless, though. If you are willing to shed a bit of that, you can trust Folding@Home's scoreboard data (and whoever supplies the data to you) to use it as a pseudo-"PoW" and run a proper proof algorithm beneath. (This is also done by a number of coins like Banano, Curecoin, and Gridcoin. You can see their people on the leaderboard with weird names like x_ALL_x, x_GRC_x, or just bunches of nonsensical alphanumeric characters.)
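The hard-to-solve/easy-to-verify asymmetry can be illustrated with a toy hash-based PoW (a minimal sketch of the general idea, not any real coin's algorithm; names are mine):

```python
import hashlib

def solve(data: bytes, difficulty_bits: int) -> int:
    """Search for a nonce: expected ~2**difficulty_bits hash attempts."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

def verify(data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Checking a claimed solution costs a single hash."""
    h = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < (1 << (256 - difficulty_bits))
```

Protein folding fails the verify half: confirming that a claimed conformation really is low-energy is essentially as expensive as finding it in the first place.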


Do you know if PoW could work with backup data? It would be great if people could have a little backup space for the price of some compute, instead of cash or "free" backup.


Are you looking for Sia and Storj?

I would say Filecoin but very strange things are going on with that coin and I can't wrap my head around how you're supposed to actually store things.


Well, I used F@H as an example. Are there truly no PoW algorithms that satisfy both conditions and actually represent some useful IRL result?


It's a pretty narrow set of calculations that are very expensive but easy to verify. You have to be looking for needles in a haystack, and the haystack has to be gargantuan but not involve all that much data transfer.

So unless you feel that it's useful to find prime numbers, good luck.


The more "useful" your PoW algorithm is, the cheaper 51% attacks on your chain will be: the external value of the work subsidizes an attacker's hardware and energy costs just as it does an honest miner's. To be maximally secure, a PoW algorithm should only generate useless data.


If you're doing it seriously you'll create plots in RAM and your SSDs will be fine.

Still a waste of space, but a gentler one.


It looks like this is a popup for a different setting. Did you watch the video outlined in the post?

The author is arguing that such a popup should also exist when locking a vault with a PIN only.

