My understanding is that CH4 breakdown in the atmosphere isn’t a matter of exponential decay either, but is (or could become) rate-limited by OH radical availability, such that the half-life increases as more CH4 is in the atmosphere and it’s possible to totally outpace the breakdown even at constant emissions.
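A toy sketch of that dynamic, with completely made-up constants, just to show the shape of a saturating sink versus plain exponential decay:

```python
# Toy model only: made-up constants, not real atmospheric chemistry.
# If the CH4 sink is limited by OH availability, removal saturates,
# so constant emissions above that ceiling grow the burden forever,
# which a plain exponential-decay model (removal proportional to C)
# can never do.
def step(c, emissions, vmax=60.0, k=1000.0, dt=1.0):
    removal = vmax * c / (k + c)  # OH-limited, saturating sink
    return c + (emissions - removal) * dt

c = 500.0
for _ in range(200):
    c = step(c, emissions=80.0)   # 80 > vmax: emissions outpace the max sink
print(round(c))                   # keeps climbing instead of leveling off
```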
There are already non-self-driving cars that get speed limits from signs; I’ve seen that feature in a Honda, for example. I imagine you’d have multiple sources, like a max speed for that type of road as a fallback, and you’d still need to read speed limit signage due to temporary limits. There are also variable speed limit roads near me now, so you have to read those electronic signs unless the database is updated very often (though no humans seem to obey those limits).
They have somewhat different amounts of sugar by volume since the fat is removed, but they don’t have added sugar. The caloric fraction from sugar will be much higher, though; maybe that’s what you’re reading?
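Back-of-the-envelope, with rough per-cup label values (approximate; worth double-checking against an actual label):

```python
# Rough per-cup label values for milk (approximate).
milks = {"whole": {"kcal": 150, "sugar_g": 12},
         "skim":  {"kcal": 85,  "sugar_g": 12}}

for name, m in milks.items():
    sugar_kcal = m["sugar_g"] * 4  # sugar is ~4 kcal per gram
    print(f"{name}: {sugar_kcal / m['kcal']:.0%} of calories from sugar")
# whole: ~32%, skim: ~56%; same sugar, much bigger caloric fraction
```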
Pedantry: it doesn’t have to be flat. For example, a triangulated paraboloid could also have that configuration; you only get a topological result from just knowing edge and vertex counts. Now if the triangles are identical equilateral ones, then you’re in business.
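For the record, the bookkeeping behind that topological result is just Euler’s formula. A quick sketch (the counts are illustrative, not tied to the parent’s numbers):

```python
# Euler's formula: V - E + F is 2 for a sphere, 0 for a torus, etc.
# For a closed surface of F triangles, each face has 3 edges and each
# edge is shared by 2 faces, so E = 3F/2: vertex/edge/face counts
# alone pin down the topology, not the geometry.
def euler_characteristic(V, F):
    E = 3 * F // 2
    return V - E + F

print(euler_characteristic(V=12, F=20))  # 2: sphere-like, e.g. an icosahedron
print(euler_characteristic(V=7,  F=14))  # 0: torus-like (the 7-vertex Csaszar torus)
```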
What if the triangles are all congruent but not equilateral? Can that even happen? That’s a fun one, so I won’t spoil it.
Not a sphere; we know that can’t happen due to the topological constraint you brought up. Instead picture a flat plane tiled by equilateral triangles. Now mark each vertex as either low, middle, or high such that every triangle has one of each. Push the low vertices below the plane and the high vertices above the plane by some amount x. Now it’s a bumpy plane full of triangles, each one isosceles and all of them identical.
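You can sanity-check one tile of that construction numerically; a quick sketch:

```python
import numpy as np

x = 0.3  # arbitrary bump height

# One equilateral tile with side 1, vertices pushed to heights -x, 0, +x.
low  = np.array([0.0, 0.0,            -x])
mid  = np.array([1.0, 0.0,            0.0])
high = np.array([0.5, np.sqrt(3) / 2,   x])

for a, b, name in [(low, mid, "low-mid"), (mid, high, "mid-high"),
                   (high, low, "high-low")]:
    print(name, np.linalg.norm(a - b))
# low-mid and mid-high both come out sqrt(1 + x^2); high-low is sqrt(1 + 4x^2).
# Every triangle gets one vertex of each kind, so they are all this same
# isosceles triangle.
```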
I think that’s the only way to do this, but maybe there are more. Could we get a hyperbolic plane this way? Normally you squeeze extra triangles around each vertex to do that so I doubt it but maybe.
This is what I thought at first but simulating it out, it’s not true. The strategy of switching doors works 2/3rds of the time but only if you allow picking the opened door (when it has the prize). That’s easy to see since it works whenever the prize isn’t behind your first choice. Conditioning on the opened door revealing a goat gives that strategy only 50% chance. After all, the opposite situation happens 1/3rd of the time and has a 100% success rate (since you can see the car), so this other case (opened door has a goat) has to have a 50% chance to add up to the unconditioned success rate. It’s still unintuitive to me though.
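Here’s the kind of quick check I mean, as a sketch (you always pick door 0; the ignorant host opens one of the other two at random):

```python
import random

trials = 100_000
switch_any = goat_shown = switch_given_goat = 0

for _ in range(trials):
    prize = random.randrange(3)
    pick = 0                          # WLOG you always pick door 0
    opened = random.choice([1, 2])    # ignorant host: random non-player door
    if opened == prize:
        switch_any += 1               # generous rule: grab the revealed prize
    else:
        goat_shown += 1
        remaining = 3 - pick - opened # the other unopened door
        won = remaining == prize
        switch_any += won
        switch_given_goat += won

print(switch_any / trials)            # ~2/3, if you may take the opened door
print(switch_given_goat / goat_shown) # ~1/2, given a goat was revealed
```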
"Knowledge" is some physical state of the brain. I don't know exactly, connections in the brain, activation of neurons, different chemicals or electrical signals - it's something, some physical actual state of the brain. How could it be possible for Monty's brain-state to have an effect on the problem?
The obvious answer is the right one - it's not possible, and Monty's knowledge has no effect on the game or the strategy the player should use. Monty knowing where the prize is and opening a bad door is the same as Monty not knowing where the prize is and having randomly opened a bad door. In either case, you should switch; Monty's knowledge doesn't change things.
Another way of thinking about it - how would the participant know if Monty opened a door at random or knowingly opened a goat door? What if they thought Monty knew, but Monty had actually forgotten and just coincidentally opened a goat door? Does any of this matter? No, because Monty's knowledge doesn't affect the game or strategy.
I also wrote a quick simulation in my Javascript console which confirms what I'm claiming here.
The "pick and stay" strategy wins about a third of the time while "pick and switch" wins about two thirds - same as the original problem. Writing the code emphasizes that it is basically the same thing - Monty coincidentally reveals a goat versus knowingly reveals.
Strictly speaking it's not the host's knowledge that changes the problem, but the fact that as a contestant you know how the host operates. That makes you able to extract information from the choice the host makes.
>"Monty Fall" or "Ignorant Monty": The host does not know what lies behind the doors, and opens one at random that happens not to reveal the car. Switching wins the car half of the time.
You simulated the situation where Monty Hall always picks the door without the prize. That’s exactly the standard Monty Hall problem. Change your code to instead allow him to choose the prize door (but never the player’s door). Then condition on him picking a goat door by dropping all the cases where he picked the prize door.
You’ll see that the conditioning discards only scenarios where switching would have paid off (the prize is shown to you), while every scenario where staying wins survives, so switching’s edge disappears.
I always believed that saying Monty's knowledge affects the odds is just a more people-friendly way of saying that these are not independent events, and a simple probabilistic model does not work when Monty is not allowed to select the prize when revealing a door. You need something fancier (like a bit of Bayesian conditional probability).
The choice happens after the door is opened. If in this variant you are allowed to switch to the revealed prize door, you win 100% of the time the prize turns out to be behind the door they open. If you aren't allowed, you lose 100% of the time that happens. So that new "branch" doesn't affect any decision you could make.
But given they do open a door with a goat, the probability is the same as the regular Monty Hall problem. So the parent's point stands: it's not the host's knowledge that makes the game "work". He's leaking information by not opening your door, thus "concentrating" the 2/3 chance it wasn't behind your door into the one remaining door.
It’s not the same. Try simulating it out if my argument above didn’t convince you. WLOG you pick door 0 out of 0, 1, 2 and the host always picks door 1.
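In numpy, the relevant value comes out like this sketch (a million simulated prize placements, conditioning on the host’s fixed door hiding a goat):

```python
import numpy as np

prize = np.random.randint(0, 3, size=1_000_000)  # uniform prize placement
goat_at_1 = prize != 1                           # keep runs where door 1 hides a goat
print((prize[goat_at_1] == 2).mean())            # P(switching to door 2 wins) ~ 0.5
```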
Hmm, you are right. If the host chooses randomly, we get this tree:
- 1/3: it is behind your door
  - 1: host picks a goat
    - switching always loses -> 1/3 chance of losing
- 2/3: it is behind one of the other two doors
  - 1/2: host picks the goat
    - switching always wins -> 2/3 * 1/2 = 1/3 chance of winning
  - 1/2: host picks the prize
    - result undefined -> 2/3 * 1/2 = 1/3 chance of ending up here
So you get 1/3 wins, 1/3 losses, 1/3 undefined: conditioned on a goat being revealed, that's a 1/2 chance to win by switching.
On reflection, this makes sense. The host is indeed leaking information by having to pick around the prize. If the prize is not behind your door, his choice is constrained, so the door he reveals supplies information about that constraint. If he is not constrained, then you are both just picking doors at random. I'm sold.
That person, and, sadly, now you too, are completely wrong. What the host knows or does not know is completely irrelevant. Imagine:
You are playing the classic Monty Hall problem. You pick door 1. Monty reveals a goat behind door 2. Monty asks if you want to switch. You know the correct strategy is to switch, so you are about to say "Yes - switch" when suddenly, the lights dim and announcer's voice booms over the speakers "You thought Monty was knowingly revealing something to you, but actually Monty just revealed a door at random. He had no foreknowledge." Are you now ambivalent about switching or staying?
If you still want to switch, and you should, that's because obviously it does not matter what Monty knew going into the problem. If you don't want to switch, please explain how the contents of Monty's brain affect the probability of which door conceals a car and which a goat.
You make the decision based on your priors: the conditional probability of the prize being behind a certain door is updated by the new information. The information content of that update can certainly be affected by your knowledge of his constraints. In the original Monty Hall problem, that knowledge you have is “he can’t reveal the prize”. There is nothing magic about “the rules were x, and given that the rules were x, my update of the probabilities is y”. It has nothing per se to do with his mental state; it has to do with the rules he had to follow, and what you can infer from them.
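To make that concrete, here is a quick conditional-probability check, as a sketch: door labels are arbitrary, you have picked door 0, and the host opens door 1 revealing a goat.

```python
from fractions import Fraction

third, half = Fraction(1, 3), Fraction(1, 2)

def p_switch_wins(p_opens_goat_door1):
    # p_opens_goat_door1[d]: chance the host opens door 1 AND shows a goat,
    # given the prize is behind door d (the player has picked door 0).
    joint = {d: third * p_opens_goat_door1[d] for d in (0, 1, 2)}
    return joint[2] / sum(joint.values())  # prize behind door 2 = switch wins

# Host knows and never opens the prize door (coin flip when 1 and 2 are both goats):
knowing = {0: half, 1: Fraction(0), 2: Fraction(1)}
# Host opens door 1 or 2 at random; opening door 1 shows a goat only if prize != 1:
ignorant = {0: half, 1: Fraction(0), 2: half}

print(p_switch_wins(knowing))   # 2/3
print(p_switch_wins(ignorant))  # 1/2
```

Same observation, different rules, different update.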
Let’s turn this around: explain why I should switch doors, but starting from scratch with the new problem, instead of by reference to the original. I think you’ll end up thinking the original solution is wrong, based on your no-telepathy rule, or you’ll see how they are different.
Or try this out. Let’s have another variant: before you pick the door, the host picks one that turns out to be a goat. Now you pick a door, and _then_ you have the option to switch. Do you still switch? Does that make sense? The situation is exactly the same (a goat behind one door and two closed doors). If both actions so far are random, it doesn’t matter what order they go in.
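If it helps, a sketch of that variant (the host draws first, and we keep only the runs where his door turned out to hide a goat):

```python
import random

wins_stay = wins_switch = kept = 0
for _ in range(100_000):
    prize = random.randrange(3)
    opened = random.randrange(3)      # host opens a door before you pick
    if opened == prize:
        continue                      # keep only the runs that showed a goat
    kept += 1
    pick = random.choice([d for d in range(3) if d != opened])
    other = 3 - opened - pick         # the door you could switch to
    wins_stay += pick == prize
    wins_switch += other == prize

print(wins_stay / kept, wins_switch / kept)  # both ~1/2: switching buys nothing
```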
I was a machine operator in the 2020 election in Philadelphia. It’s as you describe: everyone there (4-6 people) has to sign off at the end of the night on the totals. Two of them are specifically one from each of the major parties, so it should be bipartisan. The numbers from the machine have to add up to the number of voters in the book. You’d need multiple accomplices to hide a mismatch, at least three people I think? Even then, you’d have to write down the names of who “voted”, so it could come to light if any of those voters checked and saw that they had an unexpected vote. The machines did change, so it may have been easier before.
And further, if these people were apparently also telling voters who to vote for, then they must have had all the election officials there in on it. Supporting a candidate like that is absolutely not allowed for poll workers, so this was blatant, with no concern about whether it was uncovered. Any voter could have reported that behavior at any point. It was done out in the open.
This already exists in some fields, though. Gene expression sequencing data is almost universally made public through the Gene Expression Omnibus website, and that’s quite storage intensive. It’s used because regulators and journals require it.
> It’s used because regulators and journals require it.
Which answers the "what benefit does it provide to the researchers publishing the data" question. A quick search answers the funding question as well: it's funded by the NIH, not the individual labs using it.
I think this example supports my point. The NIH came up with a way to give different answers to the two questions I asked, and it gets used. I'm glad the NIH has been making this a thing, it's a great use of public funds.
I'd still caution anyone against trying a "make a data platform and researchers will use it" approach to the problem unless they can answer those questions.
It makes sense that it would have weird connections, but the big claim here is that it outputs those connections as rendered text despite failing to output the actual text it was trained on and prompted with. That sounds very unexpected to me, the kind of claim that requires a lot of evidence (which would be easy to cherry-pick), though this debunking wasn’t convincing either.
https://en.m.wikipedia.org/wiki/Atmospheric_methane