Random
Let's say you have a large set of truly random, widely distributed numbers. What portion of them, if you counted them up, would start with the digit 1? How about 2? 3? Most people intuitively say it's 1 in 9, an equal share of probability for each one.
In reality, this isn't what ends up happening. In a large random sampling of numbers, you can expect about 30% of them to start with a 1, about 17% to start with a 2, and the percentages to decrease progressively as you work your way up to 9.
It's called Benford's law. Among other things, it's used by the IRS to analyze tax returns. If someone fudges the numbers on a corporate tax return, you would expect to see more deviation from the first digit distribution predicted by Benford's law, so those returns are more likely to get an audit. Oops.
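The distribution behind those percentages has a simple closed form: the probability that a number's leading digit is d works out to log10(1 + 1/d). A quick sketch in Python (the function name is just illustrative):

```python
import math

def benford_probability(d):
    """Probability under Benford's law that the leading digit is d (1-9)."""
    return math.log10(1 + 1 / d)

# Print the expected share for each leading digit.
for d in range(1, 10):
    print(f"{d}: {benford_probability(d):.1%}")
```

Running this shows the familiar pattern: roughly 30.1% for a leading 1, 17.6% for a 2, tapering down to about 4.6% for a 9, and the nine probabilities sum to exactly 1.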
A professor of statistics conducted an experiment with her students once. She stepped out of the room and had two groups of students flip a coin 100 times. The first group used an actual coin, and wrote, in order, whether they got heads or tails with each toss of the coin.
The second group was instructed to toss a "mental" coin. One member of the group randomly called out "heads" or "tails" 100 times. Just like the first group, they tallied up their results.
One last thing: the professor didn't know which group was doing which.
When the professor returned, she looked at the tallies from the real coin and the mental coin, and quickly made a determination about which group had flipped an actual coin. She did this several times, and was always able to spot the real coin right away.
How did she do this? Because, according to her, 100 actual coin flips tend to contain runs of six or seven heads in a row (or tails in a row). In the mental coin flipping group, the results tended to go back and forth from heads to tails with an eerie consistency.
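The professor's claim is easy to check with a simulation. The sketch below (function names are my own, not from the story) flips a fair coin 100 times, records the longest run of identical outcomes, and repeats the experiment many times:

```python
import random

def longest_run(flips):
    """Length of the longest run of identical consecutive outcomes."""
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

def fraction_with_long_run(trials=10_000, n_flips=100, run_len=6, seed=42):
    """Fraction of fair-coin sequences containing a run of at least run_len."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        flips = [rng.randint(0, 1) for _ in range(n_flips)]
        if longest_run(flips) >= run_len:
            hits += 1
    return hits / trials

# Most 100-flip sequences contain a run of six or more heads or tails.
print(fraction_with_long_run())
```

In my runs the fraction comes out well above half, which is exactly why a tally that keeps alternating between heads and tails stands out as fake.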
We don't understand true randomness because it doesn't always feel right, and it doesn't always look the way we would expect it to.