
Law of Large Numbers - this'll make you think twice about how you pick lotto numbers!


FAIR COIN AND LARGE NUMBER THEORY
> *** FAIR COIN TOSSING QUESTION ??? ***
> If I toss a fair coin 1000 times and the first 700 tosses reveal
> 400 heads and 300 tails, what would I expect from the last 300 tosses?
>
> a) approx 150 heads + 150 tails
>  bringing total heads to 550 and total tails to 450 ?
>
> b) approx 100 heads + 200 tails
>  bringing total heads to 500 and total tails to 500 ?

"Fair coin" usually means that the chance of heads is equal to the chance
of tails.  Thus (a) is [roughly] correct -- certainly (b) is wrong.  If
you assume that the results of the first 700 indicate that the coin may
NOT be fair, then all bets are off.
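
A quick simulation sketch bears this out (Python, standard library
only; the seed and trial count are arbitrary illustrative choices):
given the 400/300 start, the remaining 300 fair tosses average about
150 heads and 150 tails, for expected totals near 550/450.

    import random

    random.seed(1)                    # fixed seed for repeatability
    trials = 100_000
    total_heads = 0
    for _ in range(trials):
        # simulate the 300 remaining tosses of a fair coin
        total_heads += sum(random.random() < 0.5 for _ in range(300))
    avg = total_heads / trials
    print(f"average heads in last 300 tosses: {avg:.1f}")   # ~150.0
    print(f"totals: ~{400 + avg:.0f} heads, ~{300 + 300 - avg:.0f} tails")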

Following is some material on the "Law of Averages" that I posted on
rec.gambling a while ago; it might be useful to you.

--
FAIR COIN AND LARGE NUMBER THEORY
In <2pjcl9$1ut@champ.ksu.ksu.edu>, mirage@champ.ksu.ksu.edu
(Eric L Sipe) asks about the Law of Large Numbers:

> where and how do the theory that (wheels/dice/coins/etc.) have no
> memory and the Law of Large Numbers converge?
>
> The Law of Large Numbers, as I understand it, says that as the
> sample size of outcomes becomes larger, the measured number of
> outcomes come closer to what is expected by the theoretical
> probability.

"Number" should be replaced by "proportion," as in fact Eric
realizes:

> You can see this by making a graph.  Make the x-axis the number of
> trials or outcomes.  If you are flipping a coin, make the y-axis a
> measure of how many heads or tails have been flipped out of the
> total. Let's say you are keeping track of heads, and the following
> sequence occurs:  HTTHH
> You would plot 1.0, 0.5, 0.33, 0.5, 0.6 vs 1,2,3,4,5.
> Anyway, this graph shows that you start out with a jagged line that
> eventually smooths down to a straight line at the expected
> probability.  My question has always been:  at how many outcomes can
> you expect the line to smooth out??
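
(As a quick check of those plotted values, here is an illustrative
sketch in Python -- my own, for concreteness:)

    # running proportion of heads for the sequence HTTHH
    seq = "HTTHH"
    heads = 0
    for n, toss in enumerate(seq, start=1):
        heads += (toss == "H")
        print(n, round(heads / n, 2))   # 1.0, 0.5, 0.33, 0.5, 0.6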

> So let's say that a new casino opens up.  For simplicity, we'll say
> that the roulette wheel has no green.  So, the theoretical
> probability of landing on red is 0.5.  But the first 1000000 spins
> at this casino all land on red.  (the probability of this for all
> practical purposes is zero, but _theoretically_ it might happen).  Now,
> what is the best strategy for betting at this wheel (assuming it can
> be proven that there is no bias in the wheel).  ... what about those
> who say that there is no optimum strategy-- that your chances are
> still 50/50???  This is where I start to disagree.  Why not stick
> around and bet a small amount on black for the next 1000000 spins?
> The Law of Large Numbers would seem to indicate that at least the
> majority of the next 1000000 spins would be black.

The Law of Large Numbers (aka "The Law of Averages," especially when
misapplied) does not so indicate.  Let's start at the beginning...

The theory of statistics and probability developed as a framework
for handling the uncertainty inherent in measuring only a sample of
a population in an attempt to estimate the true values applicable to
the entire population.  The theory is applicable where the true
value is uncertain because it is not practical to measure each
individual comprising the population.  The impracticality arises
from constraints on observation due to limited resources (e.g.,
polling) or the finitude of time (e.g., roulette outcomes).  A
statistical analysis always provides, with any statement of true or
expected value, an estimate of the error of the stated value.

Sometimes, in probabilistic analysis, the error estimate is omitted
and replaced with an assumption that the population is infinite.
This is the case when we say, for example, that the expected loss on
a double-zero roulette wheel wager is exactly 2/38 of the bet.  But
this is just a kind of shorthand which expands to a statement that
the error is zero when the population is infinite.  Underlying the
whole analytical enterprise is the assumption that the outcome of
any future spin or series of spins of the wheel is uncertain.
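
(That shorthand is easy to make concrete.  A sketch in Python,
assuming the standard even-money red bet:)

    # expected loss per unit wagered on red at double-zero roulette:
    # 18 red, 18 black, and 2 green pockets; an even-money bet wins
    # with probability 18/38 and loses with probability 20/38
    p_win = 18 / 38
    ev = p_win * (+1) + (1 - p_win) * (-1)
    print(ev)               # -0.0526... = -2/38 of the bet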

What the Law of Large Numbers says is that the larger the sample
size, the higher the probability that a collective measure of the
sample will fall within any predetermined range around the true
population collective value.  (Examples of "collective" values would
be the average height of a group of people and the proportion of red
in a series of roulette spins.)  In short, the larger the number of
observations, the smaller the error of the estimate.
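
A minimal simulation sketch of that shrinking error (Python; the
sample sizes, repetition count, and seed are arbitrary illustrative
choices):

    import random, statistics

    random.seed(2)
    for n in (100, 1_000, 10_000):
        # 200 independent samples of n fair spins each; record the
        # proportion of red in each sample
        props = [sum(random.random() < 0.5 for _ in range(n)) / n
                 for _ in range(200)]
        # the scatter of the estimates shrinks like 0.5/sqrt(n)
        print(n, round(statistics.stdev(props), 4))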

Notice in the above statement of the law I said "the higher the
probability" that the sample result (e.g., mean or proportion) will
lie within some range of the true value, not "the closer the sample
proportion will be" to the true value.  We can use the law to talk
about probabilities of random future results because the law is a
concise statement of the nature of the uncertainty inherent in a
random process.  We cannot use the law to remove uncertainty without
contradicting its premises.

The contention that the law implies that a past series of red results
makes future black results more probable is based on the following
argument:

Premises:

(1) Red and black are equally probable (let's ignore green for
simplicity), i.e., the true population proportion is 50%.

(2) According to the Law of Large Numbers, the more spins of the
wheel, the higher the probability that the observed proportion will
lie within N% of 50% for any N.

(3) We have observed a series of X spins in which the result was red
each time.

(4) We propose to observe Y future spins.  Per (2), there is a
higher probability that the proportion of black in X+Y spins will be
close (for any given specification of "close") to 50% than it will
be for X spins.

Conclusion:

For the next Y spins, black is more probable than red.

Not only does the conclusion not follow from the premises, it
contradicts the primary one.
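
One way to see the contradiction concretely is to simulate it (a
sketch under the stated assumptions -- no green, unbiased wheel; the
streak length and spin count are arbitrary choices):

    import random

    random.seed(3)
    streak = next_red = cases = 0
    for _ in range(2_000_000):
        red = random.random() < 0.5
        if streak >= 5:            # the last 5 spins were all red...
            cases += 1
            next_red += red        # ...so is black now "due"?
        streak = streak + 1 if red else 0
    print(next_red / cases)        # ~0.5: the wheel has no memory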

(Requoting...)
> Anyway, this graph shows that you start out with a jagged line that
> eventually smooths down to a straight line at the expected
> probability.

The further to the right you go on the graph, the more the chance
that the y value will lie close to 0.5.  But it is possible, for any
given x value, for the y value to lie anywhere between 0 and 1
(assuming that we have not yet made any observations). Both of these
statements are simply reformulations of the assumed nature of the
wheel:  any given spin can result in either red or black, and the
probability of either is 0.5.  No valid argument from those premises
can contradict them.

> My question has always been:  at how many outcomes can you expect
> the line to smooth out??

This red/black process has results which are described by the
binomial distribution, which for large N is closely approximated by
a Gaussian (normal) distribution.
For any given number of observations N, if we plot the number of
deviations from the expected value on the horizontal axis, and the
probability of that number of deviations on the vertical axis, we
get the familiar "bell-shaped" function, very roughly thus:

p                    *
r                  *   *
o                 *     *
b               *         *
.            *               *
          *                     *
fewer than expected  0  more than expected
      nbr of deviations from expected

The peak in the center occurs at x = 0 and y = 0.798 divided by the
square root of N.  Thus:

      Number of observations         Chance that red/black EXACTLY
                                            equals 1.0

             10                               25.2%
            100                                8.0%
          1,000                                2.5%
         10,000                                0.8%
        100,000                                0.3%
      1,000,000                                0.1%
     10,000,000                                0.03%
    100,000,000                                0.008%
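
(The tabled values follow the 0.798/sqrt(N) approximation; an exact
binomial check is a few lines of Python -- an illustrative sketch:)

    import math

    # P(exactly N/2 reds) for a fair red/black wheel: exact binomial
    # value vs. the 0.798/sqrt(N) approximation tabled above
    for n in (10, 100, 1_000, 10_000):
        exact = math.comb(n, n // 2) / 2**n
        approx = math.sqrt(2 / (math.pi * n))   # = 0.7979.../sqrt(n)
        print(f"{n:>6}  exact {exact:.4f}  approx {approx:.4f}")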

The standard deviation of the distribution is half the square root
of N. Thus there is about a 95% chance that the number of excess
reds or blacks will lie within the square root of N (two standard
deviations).  This implies:

      Number of observations       95% chance that proportion is
                                   within, or 5% chance outside of:

             10                             0.18 - 0.82
            100                             0.40 - 0.60
          1,000                             0.47 - 0.53
         10,000                             0.49 - 0.51
        100,000                            0.497 - 0.503
      1,000,000                            0.499 - 0.501
     10,000,000                           0.4997 - 0.5003
    100,000,000                           0.4999 - 0.5001
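
(These bounds are just 0.5 plus or minus 1/sqrt(N), i.e., two
standard deviations of the proportion.  A sketch in Python:)

    import math

    # 95% band for the proportion of reds: 0.5 +/- 2 standard
    # deviations, where the SD of the proportion is 0.5/sqrt(N)
    for n in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
        half_width = 1 / math.sqrt(n)       # 2 * 0.5/sqrt(n)
        print(f"{n:>9}  {0.5 - half_width:.4f} - {0.5 + half_width:.4f}")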


SUMMARY
-------
Given a statement about the uncertainty of an individual outcome,
the Law of Large Numbers is an extrapolation to a statement about
the uncertainty of the net result of a number of outcomes.  Use of
the law implies that the statement about the nature of an individual
outcome remains true.

If a future spin of the wheel will result in either red or black,
but we have no information as to which, then we can make certain
statements based on the Law of Large Numbers about probabilities
for a collection of such spins.

To use the law to argue that an outcome is "due" as a result of a
deficiency of that outcome in the past is to take the law outside
its domain of discourse and to assume the contrary of the premise
underlying the law.  The addition of past, known results to a number
of future results does not make the latter number "larger" in the
sense relevant to the Law of Large Numbers.  The law and probability
theory in general do not speak to past or known results, but to
future uncertain results.

EXERCISES
---------
(1) You observe that 100 consecutive spins of the red/black wheel
come up red.  Based on your interpretation of the "Law of Averages",
you are about to place a series of bets on black.  Just then you
discover that the wheel on the next table has just completed 100
spins all of which came up black.  Do you still make the bets?  Do
you also bet on red on that other wheel?  Do the two wheels
together cancel each other out to satisfy the "Law of Averages"?
Or is the "Law of Averages" applicable here at all?

(2) You observe that for 100 spins of the wheel, red and black
alternated precisely, so that each odd-numbered spin (1st, 3rd, ...
97th, 99th) came up red and each even-numbered spin came up black.
If asked beforehand the probability of this exact result, you would
presumably have said that it was very small.  Assuming that you are
confident that there is no physical wheel or dealer anomaly to
explain the results, i.e., that the wheel remains unbiased, do you
now bet on red with any confidence?

(3) [Multiple-choice]  You are playing double-zero roulette and
experience a consecutive string of ten reds.   The dealer exclaims,
"Wow!  We just had ten reds in a row!  What's the probability of
THAT?"  You answer:
    (a) 0.00056871
    (b) 1.00000000
    (c) Cocktails!
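
(For answer (a), one line of Python -- a sketch -- reproduces the
figure:)

    # probability of ten consecutive reds at double-zero roulette:
    # 18 of the 38 pockets are red, and spins are independent
    print((18 / 38) ** 10)      # 0.00056871...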

--
sbrecher@connectus.com (Steve Brecher)


