And I don't know how many times I, and the other guy in the guild who understood statistics, had to explain to the others that random doesn't mean uniform. If a certain piece of armour has a 1/X chance to drop from a boss, what people think should happen is that if they kill that boss X times, they should see it drop once.
But the reality was of course that loot was very non-uniform. Some pieces we saw lots of times, and other pieces very rarely, despite them having the same drop chance. And the players who wanted those pieces that happened to be rare for our guild, got very, very angry.
We saw the same things on the official message boards: players were furious after having spent a year killing the same raid boss once a week and never seeing a certain piece drop for them. But simple math shows that with millions of players and tens of thousands of raiding guilds, some of those will see very streaky results.
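A quick simulation makes the point (a Python sketch; the 1-in-20 drop chance and kill count are invented for illustration). With a 1/X chance and X kills, the fraction of guilds that never see the drop converges to (1 - 1/X)^X, which is close to 1/e, about 37%:

```python
import random

def fraction_never_seen(chance, kills, trials=100_000, seed=1):
    """Fraction of simulated guilds that never see a drop with the
    given per-kill chance after `kills` kills."""
    rng = random.Random(seed)
    misses = sum(all(rng.random() >= chance for _ in range(kills))
                 for _ in range(trials))
    return misses / trials

# A 1-in-20 drop farmed 20 times: (19/20)**20, so about 36% of guilds see nothing.
print(fraction_never_seen(1 / 20, 20))
```

So even with "fair" odds, a large minority of guilds farm the boss the "right" number of times and walk away empty-handed.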
These days in World of Warcraft, boss monsters drop tokens instead, and when you have X tokens, you can exchange them for a piece of armour, or a weapon, guaranteed. And no one complains about the random loot anymore.
I was bitten by another random quirk in World of Warcraft. There was a long-running achievement called "What A Long, Strange Trip It's Been", which took at least a year of real time to complete: you had to actively play during each of the ten or so in-game festivals and holidays, and do some quests and tasks during each. If you missed a festival, you had to wait another year to try again, so if you were aiming for the achievement, you really wanted to do it in one go.
During their version of Valentine's, each player got a bag of heart-shaped candy, and the task you had to complete was to pull out at least one each of the eight different heart candies. But you could only pull a piece of candy once every hour or so, each time you pulled a piece you had a 1/8 chance to get a certain piece, but the holiday was time-limited, so you had about two weeks to complete it.
Sounds easy and fair, right? 1/8 chance, get all eight pieces, two whole weeks, easy! Except that there was one piece that I just never got. The piece that said "I LOVE YOU!". And as the time went by, I got more and more frantic, logged in more often so as not to miss any opportunity to pull a piece of candy, but no luck.
So, I did a quick bit of math. You could pull one piece every hour for two weeks, but sleeping, not playing, missed days, etc., meant that I effectively pulled ~100 pieces. The chance of missing a certain piece on one pull is 7/8, so missing it 100 times in a row is (7/8)^100, which is roughly 1.6 in a million.
With ten million players all doing the same thing, there's going to be a number of them that will hit that "one in a million" chance, which means that whoever designed that part of the achievement didn't do their homework, didn't do the math. Because intuition tells you that 1/8 chance is plenty! Hundreds of tries, of course everyone will get all pieces! Except proper math tells you otherwise, and I was one of the "lucky" outliers.
(Later, they apologized and retroactively removed that part of the achievement, so I got my purple nether-drake mount without having to wait a year extra.)
The chance of failure would be 8 times that much because there are 8 pieces you could miss, which comes to about 12.7 per million.
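For anyone who wants to check the arithmetic, it's a two-liner (the factor of 8 is a union bound, so a slight overestimate):

```python
# Chance of never drawing one specific candy in ~100 pulls,
# each pull uniform over the 8 types:
p_miss_one = (7 / 8) ** 100      # ~1.59e-6, a bit more than one in a million

# Union bound over the 8 candies you could have been missing:
p_miss_any = 8 * p_miss_one      # ~1.27e-5, about 12.7 per million
print(p_miss_one, p_miss_any)
```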
But a uniform distribution is random, isn't it? This sort of phrasing is rampant in this thread, and I'm confused by it. How can a uniform distribution be considered non-random, or "less" random than another distribution?
Read the article for more details and examples.
Well, you're still assuming the colloquial definition of "random," which implies "uniformly random." Of course, we can have a random process that is not uniformly random.
The results of a uniform distribution, sampled a large number of times, are less uniform than intuition predicts.
A little randomness is good, it keeps the experience from being monotonous, but there is a balance between uncontrollable and unpredictable annoyance and quirky interesting outlier behavior. A lot of why WoW did so well was because it often walked the fine line, and it did it everywhere. In that game crafting had percent chances of giving skillups, you had to level weapon skills with a percent chance per hit of getting a point, you had critical % chance, hit % chance, dodge %, parry %, block %, you had talents that gave you % procs for special effects. Tremendous amounts of RNG that most people never even noticed but kept them playing for so long in many respects.
The other example was in Civilization, where they showed percentage chances of winning a battle. The problem is that when a player had three battles with, say, a 50% chance of winning and lost them all, they would get very frustrated. So they changed it to be much more like the average person perceives statistics to be. This means actually weighting the percentages differently from what the player is told, so that if a player has a one-in-three chance to win each of three battles, it's actually very likely they will win one of them.
This is a (persistent) myth. None of the Civilization games actually do this. Civilization 4 is the only game in the series that displays combat prediction as a percentage chance, and the source code for Civ 4's combat engine is publicly released and well understood. (I am a developer on a Civ 4 modding project and familiar with the code.) The earlier Civ games before 4 don't display any kind of combat odds, and Civ 5 has a totally different system that isn't based on winning percentage. So, unless you're talking about some other game like Galactic Civilizations or a mod for Civilization 4, this isn't true.
Your underlying point is certainly true, though: gameplay often is perceived as a better experience with "randomness" smoothed out to produce more uniformly distributed results.
This is the distinction that the original article highlighted and gives rise to the Poisson distribution, which can have unintuitive results because it is not a uniform distribution. You are describing a uniform distribution. (A Poisson distribution characterizes the number of hits you would expect after running some number of iid random events.)
Wesnoth is a good example of a game which is frustrating for many people. I think the reason is that it attaches random outcomes to decisions. Suppose you order a unit to attack another one. There's a base 0.6 chance to hit. The unit has 3 attacks, each dealing 12 damage. What does the distribution of damage output look like?
- 0 damage: ~ 6%
- 12 damage: ~ 28%
- 24 damage: ~ 43%
- 36 damage: ~ 21%
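Those numbers are just the binomial distribution over 3 independent strikes; a sketch to reproduce them (Python, using the parent's numbers):

```python
from math import comb

def damage_distribution(attacks=3, p_hit=0.6, dmg=12):
    """Probability of each total-damage outcome, assuming independent strikes."""
    return {k * dmg: comb(attacks, k) * p_hit ** k * (1 - p_hit) ** (attacks - k)
            for k in range(attacks + 1)}

dist = damage_distribution()
# dist is {0: ~0.064, 12: ~0.288, 24: ~0.432, 36: ~0.216}
expected = sum(d * p for d, p in dist.items())   # 21.6 damage on average
print(dist, expected)
```

The frustration comes from the spread: the average is 21.6, but more than a third of the time you get 12 or less.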
Counterexamples: Neuroshima Hex (board game), Seasons (board game), Mission in Space: Lost Colony (flash game). In all of these games randomness is used to improve variety. In MIS there's actually little randomness and it feels more like a puzzle. The most visible effect of randomness in the game is which alien gets spawned. All 3 aliens have the same HP and movement speed, so it's not a big deal. Seasons is a card and dice game, but an unusual one. Each side of a die gives some options to choose from, and there are no inherently good or bad sides, only sides that are good or bad at this very moment. The cards you start with are not really random either, because the game uses a draft mechanic: you take one card from your hand and pass the rest to your neighbor on the left. Neuroshima Hex is essentially a game with hexagonal cards. Randomness only determines what cards you draw; once you do, you place the hex-cards on the board, and battle resolution itself is deterministic.
The bottom line: I noticed that I much more enjoy games where randomness is used not to determine success/failure, but instead options available. That is, as long as the options are balanced relative to each other. Heroes of Might and Magic 3 has randomly selected spells and skills at levelup, but they range from awesome to pathetic. The result is frustration.
All of the games I mentioned (Except Heroes 3) are free. You can actually play both board games online, too.
Another classic example of this randomness-pain is earlier versions of the Civ series, where you could lose tanks against pikemen or other ancient unit types. Not likely, but it hurt when it happened.
Players can then start counting. E.g., if I know '8' has come up 7 times and '6' only once, it changes my strategy, because there's now a benefit to expanding with a new settlement (or "city") on a '6' instead of an '8'.
I'm not against it: it's interesting. But it's not really the Settlers of Catan anymore: it's a game which happens to share a lot of rules with the Settlers of Catan but which is definitely different.
This reminds me of Magic: the Gathering and the misery of mana screw.
I disagree with the uniform "should be", because there are so many different design objectives. Slot machines aren't "good games" by Euro standards, but people spend a lot of money on them. (One can argue about whether that's enjoyment or addiction. I'll skip that for now.) Random payoffs can foster enjoyment, as seen in Skinner Boxes and on slot machines. Random denial makes people unhappy. What you want in most games is some degree of random windfall but no one getting "killed by the dice".
Have you played Ambition, by chance? It's a trick-taking card game designed to remove card luck.
Incidentally, apparently people complain about the online versions of the game providing a land distribution that's "too even".
Also, while mana screw sucks, between the fact that you can decide whether or not to begin with a certain opening hand and the fact that you can design your deck around such circumstances (mana fixing, proper distribution of card costs), it's far more fun than the alternative.
Have you actually tried the alternatives? There's a trivial variant of M:tG where you can play any card face down to act as a land.
You have a point, though - constructing a deck in M:tG is far more fun than playing one.
That's not really eliminating randomness, since you're just evaluating (at runtime) whether or not a given card is more valuable as a mana source or at face value. There's still some randomness in determining how high the opportunity cost of playing a land is.
A better example of nonrandomness - which I have considered playing with my friends - is enforcing an even mana draw by increasing access to mana each turn in a manner proportional to the number of lands in the deck. That's deterministic.
> You have a point, though - constructing a deck in M:tG is far more fun than playing one.
I've been playing Magic: The Gathering for far longer than I should admit, and I'd disagree.
Online bridge has a similar problem. Four riffle shuffles (which do not fully randomize the deck) are typical in bridge games, and this means that long suits (and, thus, better hands) are more common. Early online bridge games randomized hands fully and therefore delivered crappier (flatter) hands than people were typically used to.
> between the fact that you can decide whether or not to begin with a certain opening hand and the fact that you can design your deck around such circumstances (mana fixing, proper distribution of card costs), it's far more fun than the alternative.
When I played (mid-90s) a lot of those options didn't exist. There weren't mulligans unless you had no land. With one, you had to play it. Also, a lot of the newer mana sources didn't exist. If you drew only 2 lands in your first 10 cards, you were screwed, but you couldn't develop an interesting deck with more than 22 land.
One of my favorite decks was an all-common blue/red control deck with 28 land, made so that I could bring it into school and not worry about it getting stolen. It was maddening to play against - I'd be like "Okay, I'll flood 3 of your creatures, block your knight with my clay statue, and counterspell your disenchant." Typically games would run 40 turns with nobody doing any damage, and then all of a sudden I'd be like "...and I'll Lava Burst you for 20. Game over." Or the Storm Shaman would come out around turn 10, by that time it's 5/4 and I've got enough mana to counterspell any attempts to remove it, and the game is over in 4 turns. Land can be devastatingly effective with cards made to take advantage of it.
One of my favorite decks is loosely based on a 5-color preconstructed deck from the Apocalypse era - it's basically almost all mana fixing, with a few cards that take advantage of multiple colors, and then four "Life/Death"s
All that time you spent doing nothing but playing land-generating spells suddenly pays off when you can attack with twenty 1/1 creatures in a single turn, then declare them unblockable and rinse and repeat the next turn.
It's even better because land-destruction spells are much rarer (and more costly), so you're impervious to Wrath-of-God-esque spells.
 If, by some miracle, they had enough life to survive, that is.
Are you saying that all online bridge games today do not fully randomize the shuffle, but simulate riffle shuffles instead?
This may actually be somewhat of a myth. I haven't verified it myself. I do know that Bridge protocol is 4 riffle shuffles, and typical riffle shuffles don't have enough entropy for 4 of them to randomize the deck (52! ~ 2^225.6, so you'd need 56.4 bits for each, and riffle shuffles have about 30 bits) but I find it hard to envision why this (possibly slight) lack of randomness would manifest itself in higher frequencies of good hands.
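The entropy figures in that comment are easy to verify (Python sketch):

```python
from math import factorial, log2

# Bits needed to pick one of the 52! orderings of a deck:
bits_to_shuffle_deck = log2(factorial(52))      # ~225.6 bits
# Spread over the four riffle shuffles of bridge protocol:
bits_per_shuffle = bits_to_shuffle_deck / 4     # ~56.4 bits needed per shuffle
print(bits_to_shuffle_deck, bits_per_shuffle)
```

Since a riffle shuffle supplies far fewer bits than 56.4, four of them cannot reach every ordering of the deck with equal probability.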
In Everquest when you cast a spell, there was a chance the spell would fizzle.
Occasionally, someone would get a streak of fizzles. Many argued this was just the normal streakiness you expect when you do repeated independent trials. Some insisted that fizzle streaks were more common than you'd expect based on that model.
Finally, a statistician who also played EQ spent some time gathering data, and determined that fizzling was not independent: your fizzle chance was an increasing function of the length of your current fizzle streak, up to a cap.
Things like that give a game character.
The raid boss Onyxia (a great big dragon in a cave, which you could only kill once a week as I recall) had a phase where she would take off and fly above the raid. Once in a while she would then do a deep breath, a fire attack which had a high chance of killing you if it hit you.
Now, all sorts of tactics emerged on how to handle the fire attacks. One that stuck for a very long time was that the number of DOTs (Damage Over Time) would lower the number of deep breaths you got.
I'm sure there were other theories as well, but this one stuck for years with raid leaders yelling at warlocks, the class with most dots, to add more dots and even stacking the raid with warlocks to get as many dots as possible (we need at least 4 warlocks to take her).
In the end there was a developer (or just Blizzard employee) who admitted that it was in fact random. This story pops back in my head once in a while because I think we do this a lot on larger scales too without realizing it: See some phenomenon caused by randomness, come up with a theory, and because it is not easily refuted (after all it works most of the time right?) it becomes common knowledge that everyone follows.
This bit is not correct. All of the most desirable items (except darkmoon trinkets and tier sets) drop directly from raid bosses, and people still complain about the outcomes of random loot.
if (item == tier2.druid.pants)
    return (item = plate-wearer-loot());
Last thought: if there were a truly random number generator, then why do fruit machines have control systems that monitor the distribution, and with it what is paid out, to comply with laws requiring that x% of the money taken has to be paid out? Mostly 70-80%, though it varies; and while the legal minimum may be, say, 70%, some casinos and the like will generally display a higher payout percentage in a prominent place, just so people think it's even higher.
That's easy to answer -- to comply with rules that require a certain outcome some x percentage of the time, one need only take a truly random generator's output and filter it by x:
outcome = (rn * 100 <= x);
If rn is a float, lies between 0 and 1 and is truly random, then the above trivial filter will produce the required distribution in the long term.
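A sketch of that filter with a quick empirical check (Python; the 75% figure is just an example payout rate):

```python
import random

def payout(x_percent, rng):
    """True with probability x_percent/100, derived from a uniform float."""
    return rng.random() * 100 <= x_percent

rng = random.Random(42)
trials = 100_000
wins = sum(payout(75, rng) for _ in range(trials))
print(wins / trials)   # hovers around 0.75
```

Note that this only guarantees the payout fraction in the long run; the monitoring systems in real machines exist because regulators care about the realized rate, not just the design rate.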
Those frickin' salad shoulders.
For example, there are certain items that can give you a chance to land a "critical hit"; a critical hit multiplies your damage by X.
Say an item gives you a 30% chance to do 2X damage. However, the randomness has memory and is designed to distribute the critical hits evenly, by gradually increasing the probability with every non-critical hit and resetting it on every critical hit. So the first time you hit, the chance isn't actually 30% but more like 10%. If you miss 3 times, the chance of the fourth hit being a critical is more like 40-50%. The hit after that will be back at 10% probability. (I'm just picking numbers out of the blue to illustrate the concept; I'm sure there's some more thought-through math behind it.)
As the number of hits goes toward infinity, 30% of them will still be critical, but the chance of getting streaks of non-critical or streaks of critical hits is very low.
In practice it works very well from my experience. It is however not completely exploit-proof, you could for example go and increase your probability of initially blowing a critical hit by first making a few hits on NPCs and then go to battle against a harder enemy. But that's more of a hypothetical exploit as the time and risk of doing this simply isn't worth those few extra percent.
And you know, it is always possible in those situations that there is a broken pseudo-random number generator involved. But of course, that's really hard to tell too because real randomness is so unrandom seeming.
Essentially, any random chance in the game of the form 1 to n could only ever be 1 to n-1, I think with n-1 being twice as likely.
This went unnoticed since most random rolls were over fairly large ranges and it didn't seem to hurt much, however it did explain why one particular subtype of loot literally never dropped.
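Purely as a guess at the shape of such an off-by-one (this is not the game's actual code), a clamp on an inclusive roll produces exactly the described distribution:

```python
import random
from collections import Counter

def buggy_roll(n, rng):
    """Intended: uniform on 1..n.  The clamp makes n unreachable and
    gives n-1 double weight (a hypothetical shape for the bug)."""
    return min(rng.randint(1, n), n - 1)

rng = random.Random(7)
counts = Counter(buggy_roll(6, rng) for _ in range(60_000))
print(counts)  # 6 never appears; 5 shows up about twice as often as 1..4
```

For large n the doubled n-1 is barely noticeable, but anything gated on the missing value simply never happens.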
But looking at the plot on the map, it appears that the higher incidence of bombs is focused on a specific region. The Poisson distribution doesn't account for the fact that a lot of the squares with a high incidence of bombs are adjacent to each other. From my layman's understanding, it appears that the bombs were in fact targeted on a specific area, but that there was a random offset from this area regarding where the bombs actually landed. Because of this, you'd see randomness in the distribution. But the distribution of bombs wasn't really perfectly random.
Is the author deliberately avoiding this point, or is there something I've misunderstood?
The Poisson analysis is only done on a subset of the area that's in the 3D plots: "Clarke's analysis was focused on the central area of higher density here and with finer geographic coordinates. Within that area his analysis found no evidence of clustering that cannot be accounted for by a Poisson process."
In quantum mechanics, if you measure two incompatible observables (like position and momentum) of a system, and then repeat that experiment many times, you will get two lists of real numbers. QM says you can predict the distribution of these numbers, but you cannot predict the individual numbers themselves. The popular way of thinking nowadays is that "the universe is just inherently random".
So I posed the question on the Physics Stack Exchange: how do we know these numbers are truly random, and not the result of some as-yet-undiscovered pseudorandom number generator that is nonetheless deterministic? Luboš Motl (Czech string theorist) replied (a bit abrasively I might add) that yes, the numbers are truly random and plenty of experiments have ruled out the loopholes. Now, there's no way to determine if a set of numbers are truly random, so how he made this bold matter-of-fact statement is beyond me.
Einstein initially believed in "hidden variable" theories, undiscovered properties of quantum systems. Most of these have been ruled out by experiment (this is what Luboš mentioned), but really, this doesn't apply at all to my question of whether those numbers are random or not. Superdeterminism seems to still allow non-randomness, but for some reason, most physicists (notably excepting Gerard 't Hooft) have discounted superdeterminism as nonsense.
Maybe the issue is the Einstein-Podolsky-Rosen problem: if the numbers are being generated deterministically, they're somehow being communicated superluminally between entangled particles, which implies that in some relativistic frames of reference they're being communicated into the past? I guess I should learn enough about QM to really understand this stuff instead of guessing.
However, it is well known that any QM system can be simulated using a classical computer, with the penalty of exponential slowdown. Let's say that I have a hypothetical, ultra-powerful classical computer and I want to simulate a gigantic system of particles including aggregates of particles (e.g. people) performing measurements of other particles. When it comes time to determine the particular values for these measurements, I must generate a random number from a Gaussian distribution. So I use something like the Mersenne Twister. From the perspective of the simulated people, their observations would entirely match our own observations in studying a quantum system.
tl;dr: state isn't necessarily a one-particle concept or a local concept. Individual particles have their own properties (spin, charge, etc.) and then maybe a collection of a million particles also has unique properties.
But my proposal is basically superdeterminism, which -- while being a loophole that has yet to be ruled out -- is unpopular. Since I'm not sure why, I guess I would need to get a degree in theoretical physics to find out.
(just paste the above into the address bar)
Are stars really plotted at random?
Up to the largest scales the universe is randomly clumped/stringed/voided.
Uniform dispersion would suggest some territoriality aspect of the species, and clumped dispersion would suggest a heterogeneity of resources (or any other hypothesis that could then be tested).
It'll output images that look like this: http://dave-gallagher.net/pics/666x666.png
from PIL import Image, ImageDraw
from random import randint


def main():
    width = 666
    height = 666
    file_name = '/Users/Dave/%dx%d' % (width, height)
    path_png = file_name + '.png'
    path_jpg = file_name + '.jpg'

    img = Image.new("RGB", (width, height), "#FFFFFF")
    draw = ImageDraw.Draw(img)

    for height_pixel in range(height):
        if height_pixel % 100 == 0:  # note '== 0', not 'is 0': 'is' tests identity
            for width_pixel in range(width):
                r = randint(0, 255)
                g = randint(0, 255)
                b = randint(0, 255)
                # Drift each channel slightly toward a second random target.
                dr = (randint(0, 255) - r) / 300.0
                dg = (randint(0, 255) - g) / 300.0
                db = (randint(0, 255) - b) / 300.0
                r = r + dr
                g = g + dg
                b = b + db
                draw.point((width_pixel, height_pixel),
                           fill=(int(r), int(g), int(b)))

    img.save(path_png, format="PNG")
    # 100 quality is 2x to 3x the file size, but you won't see a difference visually.
    img.save(path_jpg, format="JPEG", quality=95, subsampling=0)


if __name__ == "__main__":
    main()
Meanwhile German agents in England were also observing the success or failure of the V2s, and in particular where they'd landed.
However, in an effort to deceive the Germans, the British started reporting the correct time of successful attacks, while mentioning an incorrect location.
Moreover, a double agent called Eddie Chapman also fed false information back to the Germans.
As a result, the aimers never really got a grip on ranging accurately. The bombs started landing to the south east of London.
There's considerably more detail about this in Most Secret War by R V Jones, who was involved in all sorts of ruses to confuse the enemy. Well worth a read.
Eddie Chapman (Agent ZigZag) was played by Christopher Plummer in the film Triple Cross. It seems to be on YouTube.
So for all you entrepreneurs, if you fail, don't fall into a depression. Sure you worked just as hard as everyone else, maybe even harder. And yeah it's annoying to see others surpass you even though you've got everything they do. But that's life, you just got a bad batch of rolls.
edit: Hmm...another question that comes to mind: is the converse true? If the spread of values of these events do not match the poisson distribution, then can we presume them to be nonrandom? Or nonindependent? Or both?
So yes, for a Poisson process, the spread (standard deviation) is equal to the square root of the mean; as the number of events gets large, the Poisson distribution approaches the normal distribution, but the relationship between the standard deviation and the mean continues to hold.
The usual suggestion is a uniform "deck" a couple of times larger than the number of pieces: it prevents large runs and makes sure that you see every piece more regularly.
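That "deck" (or "bag") approach is easy to sketch (Python; the two-copies-per-piece multiplier follows the suggestion above, the Tetris piece names are just an example):

```python
import random
from collections import Counter

def bag_randomizer(pieces, copies=2, seed=0):
    """Deal from a shuffled 'deck' holding each piece `copies` times,
    reshuffling when it runs out.  Caps both droughts and streaks."""
    rng = random.Random(seed)
    while True:
        bag = list(pieces) * copies
        rng.shuffle(bag)
        yield from bag

gen = bag_randomizer("IJLOSTZ")
first_bag = [next(gen) for _ in range(14)]
counts = Counter(first_bag)
print(counts)  # every piece appears exactly twice per 14-draw bag
```

With two copies per piece, every piece is guaranteed to appear twice in every 14 draws, so the longest possible wait for a given piece is bounded.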
on the other hand, here's "Evil Tetris": http://qntm.org/hatetris
I guess I'm not a good randomness detective.
With that it gets hard to truly say what is random and what is an as-yet-unknown pattern. This is why many have taken the approach of not relying on a single source of random numbers, but using many and averaging them out. Then again, is that random, given that such an approach biases out the chances of getting a high or low value?
So I postulate that one man's random string is another man's non-random string, and I define randomness as an as-yet-undetermined sequence of data. The included Dilbert strip is, with that, extremely clever and totally true.
No, it's just saying that it's much easier to get pseudorandomness out of a computer than true randomness.
"This is why many have taken the approach of not having a single source of random numbers but use many and average out from there. There again is that random as the chances with such an approach of getting a high or low value would be biased out."
Technically speaking, you wouldn't average; you'd add them together and take the fractional part (modulo 1). That can negate bias as long as one of the sources is good, even if you don't know which one.
Of course, you can remove bias from a single source by Von Neumann's method, although this might be computationally harder than the above:
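A sketch of Von Neumann's method, which assumes the flips are independent even if biased: read bits in pairs, emit 0 for the pair 01, 1 for the pair 10, and discard 00 and 11.

```python
import random

def von_neumann(bits):
    """Debias a stream of independent (possibly biased) coin flips:
    read pairs; 01 -> 0, 10 -> 1, 00/11 -> discard."""
    it = iter(bits)
    for a, b in zip(it, it):
        if a != b:
            yield a

rng = random.Random(3)
biased = [1 if rng.random() < 0.8 else 0 for _ in range(200_000)]
out = list(von_neumann(biased))
print(sum(out) / len(out))   # close to 0.5 despite the 80/20 input bias
```

The cost is throughput: a heavily biased source discards most of its pairs, which is where the extra computational work comes from.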
No, a uniform distribution is not evidence of randomness. Consider the digits 0 - 9 repeated endlessly:
Uniformly distributed? Yes. Random? No.
> What is this notion of "pure randomness" that this article and many comments seem to be eluding [sic] to?
Second, although the topic is complex, one test of randomness is that an ideal compression method, one able to find and exploit any repetitive pattern, cannot compress a random sequence.
Third, the term "entropy" as used in information theory is tied to randomness, as explained here:
A quote: "The entropy rate for a [fair coin] toss is one bit per toss. However, if the coin is not fair, then the uncertainty, and hence the entropy rate, is lower."
Based on that, high entropy -> high randomness.
Not to oversimplify a complex topic.
I think you're fudging what is supposed to be uniform here. In your example, the unigrams (i.e. single digits) may be uniform, but the n-grams for n ≥ 2 are not.
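This is easy to check by counting; the pairs are already non-uniform at n = 2:

```python
from collections import Counter

seq = "0123456789" * 100
unigrams = Counter(seq)
bigrams = Counter(seq[i:i + 2] for i in range(len(seq) - 1))

print(len(unigrams))   # 10: every digit appears equally often
print(len(bigrams))    # 10: only 10 of the 100 possible pairs ever occur
```

A genuinely random digit stream would show all 100 bigrams at roughly equal frequency; here 90 of them never occur at all.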
The glow worms are still distributed randomly in the mathematical sense, but knowing the position of one tells you something about the likely positions of others, so they are not independent.
Common parlance doesn't do a great job at talking about the features of random distributions, but when people say "purely random," they often seem to mean "uniformly distributed and independent." Both pictures have uniformly distributed points, but the glow worms are not independent.
Now if we take that same kill data and instead ask "how many times do I get item type 2 in 100 kills?", you'll see (approximately) a Poisson distribution. If you didn't see a Poisson distribution when asking that question, then the events were probably not independent of each other, and hence weren't actually random (e.g. the monster could just drop the 50 item types in order).
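A sketch of that check (Python; the 50 item types with uniform, independent drops are an invented example):

```python
import random
from collections import Counter
from math import exp, factorial

rng = random.Random(5)
n_types, kills, runs = 50, 100, 20_000
lam = kills / n_types   # on average, 2 drops of any given type per 100 kills

# Count how often item type 2 drops in 100 kills, across many 100-kill runs:
counts = Counter(sum(rng.randrange(n_types) == 2 for _ in range(kills))
                 for _ in range(runs))

# Observed frequencies track the Poisson(2) probabilities:
for k in range(6):
    observed = counts[k] / runs
    poisson = exp(-lam) * lam ** k / factorial(k)
    print(k, round(observed, 3), round(poisson, 3))
```

(Strictly the count is Binomial(100, 1/50), but with a small per-kill probability that is nearly indistinguishable from Poisson(2). The in-order dropper would instead put all the mass on exactly 2.)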