Hacker News | mathgradthrow's comments

You are literally comparing the US to China on censorship in this sentence.

>As an American

mhm


Please stop. This is not reddit.

As a Belgian, I don't think there is much point in creating a fake account.

We know the US is different from Trump. And yet here we are: a wannabe dictator is now in control of the US.

We realized in a very short time that we have far too many dependencies on the US.

It's not going to be easy to unwind a >70-year-old partnership, but not being prepared if he succeeds would be worse; just ask your Canadian friends.

This guy is willing to sacrifice the entire US for quick personal gain. Luckily, it seems he still needs the votes at the end of November.

Good luck


You are comparing Donald Trump to Xi Jinping, an actual dictator. At best this is histrionics.

Food is wayyyy better in America than Europe, are you kidding? Why do you think we're so fat?

Meanwhile, tracking consumption involves error bars that span a factor of 2. Go figure out how many calories are in an avocado. Is that per-gram figure amortized over the weight of the pit, or is it just for the flesh?

Counting calories precisely was invented by the processed food industry.


Is it actually anti-Europe to ask Europe to meet its NATO obligations?


This is a non sequitur that has nothing to do with the comment or articles you're responding to.


>The US government has decided that it is anti-Europe.


Most of Europe does meet them. It is mainly a few countries, like Spain, which do not.


Whataboutism.

The linked articles are not about NATO obligations.


The American public schooling system in action yet again here. 3rd grade reading comprehension in no way stops them from loudly proclaiming some of the dumbest shit possible.


Literally this.

"The US is gonna have their FO moment aaaany day now, they're gonna regret messing with us Europeans!"

"Bro you haven't even kept your end of the deal on your NATO military spending."

Turns out that despite all the hubbub, the 'superpower' fading the fastest was Europe after all.


1 in 400 US citizens is diagnosed with Parkinson's. If by "thousands" this headline means 5,000, then 1 in 2,000 US farmers has Parkinson's. Stop it.


Skepticism is healthy. You've found that the numbers don't make sense at face value. The problem is that you stopped there, you haven't even made any attempt at reconciling them with the original claim.

What if the US figure of 1 in 400 is that high precisely because it includes people exposed to pesticides? In other words, maybe the number would be 1 in 500 if it weren't for Paraquat? You'd have to look at concentration maps, or at the very least check the diagnosis rates in other countries, before you can truly dismiss the claim, imho.


>The problem is that you stopped there, you haven't even made any attempt at reconciling them with the original claim.

What are you talking about? I've done all the diligence that is due. If you want to convince me, you have to actually present your evidence. When you do present evidence, I'm free to assume that the evidence you've presented is your best evidence.

The article starts with a story about an 83-year-old farmer with Parkinson's. I'm not going to continue reading after that point. An 83-year-old with Parkinson's is not an anomaly; his existence is not evidence of anything. I'm not required to look beyond this point, and I'm absolutely free to comment about that. This is reasonable skepticism. I am not claiming evidence of absence; I'm claiming absence of evidence.

But fine, if you want to look for evidence of absence, then as you say, we need look no further than a random country where paraquat is banned. Paraquat is banned in Germany, and there are 80 million people in Germany; go google how many of them have Parkinson's disease.

If you are trying to convince me of an effect so small that you cannot even come up with one anomalous Parkinson's case to write a story about, then I don't care.


The article already speaks to the numbers they mean and the scale they believe them to be:

> More than 6,400 lawsuits against Syngenta and Chevron that allege a link between paraquat and Parkinson’s are pending in the U.S. District Court of Southern Illinois. Another 1,300 cases have been brought in Pennsylvania, 450 in California and more are scattered throughout state courts.

> “I do think it’s important to be clear that number is probably not even close to representative of how many people have been impacted by this,” said Christian Simmons, a legal expert for Drugwatch.


There are hundreds of pictures of the Loch Ness monster.


I'm not saying you have to believe it, just that rhetorically asking if it's more than 5,000 in the US is redundant when the article already says there are more than that many individual cases about it in a single district court.


I drastically underestimated the number of farmers, who skew older. This is very unlikely to be anything.


Is that just a hunch, or do you have new data driving that, outside the two narratives presented in the article?


My grandfather was a crop duster pilot in the 60s-70s. He died of Parkinson's almost 4 years ago today. He is the only one in my family to succumb to this disease. For a brief moment I was relieved to know there was some explanation for his suffering.

Then I read the HN comments. It is beyond infuriating to read a well-researched paper, with 1,300 open legal cases and overwhelming evidence, only to be met with "zero chance this is real."


I don't think you would know a well-researched paper if it bit you. Legal cases are only evidence that there is money to be made in litigation.


If only we had tools like science and statistics... https://www.sciencedirect.com/science/article/abs/pii/S00139...


The article mentions epidemiological studies showing that people living or working near farmland where paraquat is used have a higher incidence of Parkinson's.

Don't be so quick to dismiss it, there could be a link.


Paraquat was used in horrendous amounts mid-century. It may be a dose-dependent outcome.


>Is it "signalling" when the left's change was for an accessibility reason, to enable more people to be able to easily read?

Uh, yes.


Here's the actual memo, in case you want to read it yourself and form your own conclusions:

https://daringfireball.net/misc/2025/12/state-department-ret...


can you explain this a little better?


The key insight is that Colin can show you a red-green-blue coloring of the graph and secretly permute the colors, so that it's blue-red-green instead when you look at an individual section, while on his own copy the graph is really colored yellow-pink-orange. Even after he has shown you every intersection of the graph individually in the red-green-blue coloring, enough to satisfy you that he can 3-color it, you still have no idea what is yellow, pink, or orange on his copy of the graph.


I can certainly explain it more; whether that's "better" is debatable!

Here's the process:

(A) You give me a graph to 3-colour;

(B) I claim I can 3-colour it;

(C) You demand that I prove it;

(D) I colour it with colours ABC and cover the vertices;

(E) You point at an edge;

(F) I reveal the colours of the vertices at the ends of the edge;

(G) If I have coloured the graph then the colours revealed will always be different;

(H) We repeat this process with a permutation of the colours between each trial;

(I) If I'm lying then eventually you'll pick an edge where either the vertices are not coloured, or they have the same colour.

(J) This process reveals nothing about the colouring, but proves (to some level of confidence) that I'm telling the truth.

So ... what's unclear?

Instructions on how to email me are in my profile if you prefer ...


Ah, I see. This is not an example of a ZKP, because you are relying on a third party who has full knowledge of the coloring, namely whatever you have drawn your coloring on.


No, that is not the case. The process does not rely on a third party.

Person A provides person B with the graph.

Person B claims to have coloured it.

Person A demands that they prove it.

Person B hides the colouring

Repeatedly:

* Person A points at an edge

* Person B reveals that the endpoints are differently coloured

* Person B re-hides the colouring and permutes the colours

If person B does not have a colouring, this process will eventually fail (with probability approaching 1 as the rounds accumulate) and person A will know that person B does not have a colouring.

But if person B does have a colouring then each step will succeed, and by repeating the process person A can achieve any desired degree of confidence that person B must, indeed, have a colouring.

This process can be made digital rather than physical, and no third party need be involved. As a sketch of one step:

* Person B colours the graph

* For each vertex, person B generates a long random string, pre-pends the colour, applies a cryptographically strong hash function to that, and sends the result to A. This "fixes and hides" the colouring.

* Person A asks for the two "colours" at the ends of an edge to be revealed

* Person B provides the associated "colour plus random string" values, the pre-images of the requested hashes

* Person A checks the hashes and now knows the colours of those two vertices.

Should I write this up "properly"? It's already discussed elsewhere on the 'net.
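
For what it's worth, here is a minimal Python sketch of that commit-and-reveal step, assuming SHA-256 as the "cryptographically strong hash" and with made-up helper names; it's an illustration of the idea above, not a production protocol:

    import hashlib
    import secrets

    def commit(colour):
        # Person B: fix and hide one vertex colour.
        # Returns (commitment, opening); only the commitment is sent to A.
        nonce = secrets.token_hex(32)             # the long random string
        opening = colour + nonce                  # colour pre-pended to the nonce
        commitment = hashlib.sha256(opening.encode()).hexdigest()
        return commitment, opening

    def reveal(commitment, opening):
        # Person A: check an opened commitment and recover the colour.
        assert hashlib.sha256(opening.encode()).hexdigest() == commitment
        return opening[0]                         # first character is the colour

    # One round, with an assumed (valid) colouring and A's chosen edge (u, v):
    colouring = {0: "R", 1: "G", 2: "B", 3: "G"}
    commitments, openings = {}, {}
    for vertex, colour in colouring.items():
        commitments[vertex], openings[vertex] = commit(colour)
    u, v = 0, 1                                   # the edge A points at
    assert reveal(commitments[u], openings[u]) != reveal(commitments[v], openings[v])

Between rounds, B would redo this with a fresh random permutation of the three colours (and fresh nonces), so openings from different rounds can't be correlated.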


Ok, I still contend that this is not a ZKP without the hashing scheme, but I agree that with the hashing scheme, it is.


How do I know/prove that you're not just saying any random two colors for whichever edge I choose?


The version I'm describing has it physically sitting in front of you at the time, so you can see that the colours haven't been changed "on the fly" after you pick an edge. In this version:

(A) I colour it;

(B) I cover the vertices so you can't see any of them, but I can no longer change them;

(C) You choose the edge, and I reveal the endpoints.

Converting this to a digital version requires further work ... my intent here was to explain the underlying idea that I can prove (to some degree of confidence) that I have a colouring without revealing anything about it.

So just off the top of my head, for example, I can, for each vertex, create a completely random string that starts with "R", "G", or "B" depending on the colour of the vertex. Then I hash each of those, and send you all of them. You choose an edge and send me back the two hashes for the endpoints, and I provide the associated random strings so you can check that the hashes match.


This reminds me of the "Where's Waldo (Wally in UK)" example:

You can prove that you found Wally with a large piece of paper with a hole in it. You move the hole over Wally, and the person you're sitting with can see you found him, but they're none the wiser about where he is on the page.


Another way is to get them to put marks/signatures over the back of a blank sheet. Overlay the blank on the actual page, cut Wally out of it where he occurs, and give them the cutout.


https://youtu.be/5qzNe1hk0oY for a video if you can't picture that.


>An answer usually contains more information than just that one bit.

Isn't the point to ask yes or no questions?


Yes, but you can make assumptions based on what you know about humans generally. Like their example of asking whether you have long hair: if you answer yes, the likelihood is that you are female.

You can think of all sorts of questions and answers like this, and when you combine them with the assumptions and answers from previous questions you can make even more assumptions. They won't always be correct, but you don't have to be "perfect", depending on your use case. For example, for advertising purposes, assumptions (even if incorrect) can still go a long way.

There is a reason Target got sooo good at identifying pregnant women[0] before the women knew they were pregnant that it creeped women out, and they had to pull back what they did with that information. This was like a decade or more ago. It's only gotten more accurate since then.

0: one example from 2012: https://techland.time.com/2012/02/17/how-target-knew-a-high-...



Even if that one particular instance is false, I seem to remember Target saying their model was too accurate and they were changing how they did things; i.e., Target admitted to predicting pregnancies very well.

Why would they do that, if they didn't think their system was that good?


Maybe to convince other companies to buy Target ads. Advertising companies talk up how effective their advertisements are to persuade other companies to buy ad space.

Target isn’t going to do something that scares away consumers, like say “our ad tracking is TOO good”, unless there’s another benefit that makes it net positive for them.


> Target got sooo good at identifying pregnant women

That's why I pay with cash and do not have a loyalty card (other customers often offer theirs at the cash register anyway). And of course I don't even go to Target.


I don't know if Target specifically use all of these, but I would bet they have data based on at least some of facial/gait/demographic recognition, wi-fi/Bluetooth beaconing, vehicle registrations, time and location tracking, statistical analysis of your purchases, and clustering of people you have made purchases next to (e.g. you bought something at the same time and till as your mother more than once). I'm sure they have other methods too. They can also combine datasets from brokers that do have a face:name link (say you used a card at another store that captured it and sold the data) and resolve you within their own data that way.


It's still a yes/no question, it's just that the question is "do you have long hair".

The goal of these decision trees is to use as few questions as possible, each dividing the remaining group into two balanced halves (and so on recursively).

Imagine a binary tree with a question at each internal node and a person at each leaf: you want the height of the tree to be minimized.
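
As a rough illustration of that (the numbers are just examples): if every question splits the remaining candidates into two equal halves, singling out one person among N takes about ceil(log2(N)) questions.

    from math import ceil, log2

    # Perfectly balanced yes/no questions halve the candidate set each time,
    # so identifying one person among N takes ceil(log2(N)) questions.
    for n in (2, 20, 1_000, 8_000_000_000):      # 8e9 ~ world population
        print(n, ceil(log2(n)))                  # 1, 5, 10, 33 questions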


Yes, but multiple yes or no questions in combination can easily yield more information than they should in a real dataset. That's the real educational point.


You seem to be confused about the difference between "less" and "more". In general a yes-no question gives less than 1 bit of information if yes and no are not equally likely. There is no way it can be expected to give more.


> There is no way it can be expected to give more.

It is indeed not possible for it to give more, because it only has a single bit answer, which by the pigeonhole principle can't give you more than one bit.

The best yes/no questions are the ones which are independent of each other and bisect the group evenly. "Are you female" is typically good because it will be approximately half the population. Then you want independent questions that bisect the population again, like "does your first name have more than the median number of letters" which should be mostly independent of the first question. Another good one is conditional questions like "are you taller than the median for your sex" since a pure height question wouldn't be independent of sex but that one is.

Whereas bad questions would be ones with highly disproportionate responses, like "do you have pink hair with black and green highlights" which might be true for someone somewhere but is going to have >99% of people answering no, or "were you born on the planet Mercury" which will be 100% no and provide zero bits of information.
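
A small sketch of that point, using the standard binary-entropy formula (the question labels are just the examples above):

    from math import log2

    def answer_bits(p_yes):
        # Expected information, in bits, from the answer to a yes/no question
        # that is answered "yes" with probability p_yes.
        if p_yes in (0.0, 1.0):
            return 0.0                            # a certain answer tells you nothing
        return -(p_yes * log2(p_yes) + (1 - p_yes) * log2(1 - p_yes))

    print(answer_bits(0.5))    # 1.0   -- an even split ("are you female?")
    print(answer_bits(0.99))   # ~0.08 -- "pink hair with black and green highlights"
    print(answer_bits(1.0))    # 0.0   -- "were you born on the planet Mercury?"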

