Probability is our attempt at making sense of random events in our rather random world. The simplest example—a coin toss—will surprisingly motivate many topics that we will formulate with greater and greater generality.
Flip a coin. You get $1$ point if ‘Head’ comes up, and nothing if ‘Tail’ comes up. You start with a score of $0$. After one flip, what is your score? Well, it depends. Does the coin land ‘Head’? Does it land ‘Tail’? It could go either way. How do we model this phenomenon? Well, there are $2$ outcomes, and if they are equally likely, then we can say that each outcome has a probability of $\frac{1}{2}$.
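As an aside, we can check this model empirically. Here is a minimal simulation sketch in Python (using only the standard `random` module; the exact frequency will vary run to run):

```python
import random

# Simulate many independent fair coin flips and estimate P(Head)
flips = [random.choice("HT") for _ in range(100_000)]
print(flips.count("H") / len(flips))  # hovers around 0.5
```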
In mathematical notation, we let the sample space $\Omega = \{H, T\}$ denote the set of possible outcomes after one flip of the coin. What are the different events that we can measure? Surely we have the non-negative probabilities
$$\mathbb{P}(\{H\}) = \mathbb{P}(\{T\}) = \frac{1}{2} \geq 0.$$
But more is true. What is the probability that we get either a ‘Head’ or a ‘Tail’? In the real world it could be the case that a coin lands on its edge, but in a mathematical world, we preclude such a possibility. This means that there are only two possible outcomes: a ‘Head’ or a ‘Tail’, and they don’t overlap. At least one of them must be true! Hence, it makes sense to assert
$$\mathbb{P}(\{H, T\}) = \mathbb{P}(\Omega) = 1.$$
This discussion almost seems trivial, until we make one modification. Now flip the coin two times. You get $1$ point if ‘Head’ comes up, and nothing if ‘Tail’ comes up. You start with a score of $0$. After two flips, what is your score?
Let’s list down the different possible outcomes:
$$\Omega = \{HH,\ HT,\ TH,\ TT\},$$
where, for instance, $HT$ means the first flip lands ‘Head’ and the second lands ‘Tail’.
How do we assign their probabilities? And furthermore, how do we determine our score? Assuming no funny business, we would think that all outcomes are equally likely. Once again, at least one outcome should hold, so we stipulate $\mathbb{P}(\Omega) = 1$. Furthermore, for any two distinct outcomes $\omega_1, \omega_2 \in \Omega$, we stipulate the event that either one occurs as their sum:
$$\mathbb{P}(\{\omega_1, \omega_2\}) = \mathbb{P}(\{\omega_1\}) + \mathbb{P}(\{\omega_2\}).$$
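Together, these stipulations force each of the four outcomes to have probability $\frac{1}{4}$. As a quick sanity check, here is a minimal Python sketch (the names `P` and `prob` are our own, not from any library):

```python
from itertools import product

# Sample space for two flips: ('H','H'), ('H','T'), ('T','H'), ('T','T')
omega = list(product("HT", repeat=2))

# Equally likely outcomes: each singleton gets probability 1/4
P = {outcome: 1 / len(omega) for outcome in omega}

def prob(event):
    """Probability of an event (a collection of outcomes), summed over singletons."""
    return sum(P[outcome] for outcome in event)

assert prob(omega) == 1.0                      # at least one outcome holds
assert prob([("H", "T"), ("T", "H")]) == 0.5   # additivity over distinct outcomes
```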
We can now formalise probability theory for finite sample spaces; the infinite case deserves a much, much longer elaboration.
Definition 1. Let $\Omega$ be a finite sample space and $2^\Omega$ denote its power set (i.e. the collection of all of its possible subsets). We call the map $\mu : 2^\Omega \to [0, \infty)$ a finite measure on $\Omega$ if
$$\mu(\{\omega_1, \ldots, \omega_k\}) = \mu(\{\omega_1\}) + \cdots + \mu(\{\omega_k\})$$
for distinct outcomes $\omega_1, \ldots, \omega_k \in \Omega$. Additionally, if $\mu(\Omega) = 1$, we call $\mu$ a probability measure, and commonly denote it by $\mathbb{P}$.
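On a finite sample space, a measure is therefore pinned down by its values on singletons. Here is a minimal Python sketch of that representation (the dictionary `weights` and the function `mu` are our own devices, not standard notation):

```python
# A finite measure represented by its non-negative singleton values
weights = {"H": 0.5, "T": 0.5}

def mu(event):
    """Measure of a subset of the sample space: sum the singleton weights."""
    return sum(weights[outcome] for outcome in event)

print(mu({"H"}))       # 0.5
print(mu({"H", "T"}))  # 1.0, so this particular measure is a probability measure
```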
Lemma 1. For any finite sample space $\Omega$ and finite measure $\mu$ with $\mu(\Omega) > 0$, there exists a probability measure $\mathbb{P}$ on $\Omega$.
Proof. Define $\mathbb{P}(A) := \frac{\mu(A)}{\mu(\Omega)}$ for every $A \subseteq \Omega$ and verify Definition 1: for distinct outcomes $\omega_1, \ldots, \omega_k \in \Omega$,
$$\mathbb{P}(\{\omega_1, \ldots, \omega_k\}) = \frac{\mu(\{\omega_1\}) + \cdots + \mu(\{\omega_k\})}{\mu(\Omega)} = \mathbb{P}(\{\omega_1\}) + \cdots + \mathbb{P}(\{\omega_k\}),$$
and $\mathbb{P}(\Omega) = \frac{\mu(\Omega)}{\mu(\Omega)} = 1$. $\blacksquare$
Hence, we can discuss most ideas using the more general $\mu$ and particularise to $\mathbb{P}$ whenever $\mu(\Omega) = 1$.
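In code, Lemma 1 amounts to a normalisation. A sketch in the same singleton-weights representation as above (again, the names are hypothetical):

```python
def normalise(weights):
    """Rescale a finite measure (given by singleton weights) into a
    probability measure; requires a positive total mass."""
    total = sum(weights.values())
    return {outcome: w / total for outcome, w in weights.items()}

print(normalise({"H": 3, "T": 1}))  # {'H': 0.75, 'T': 0.25}
```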
Theorem 1. If $\Omega$ is finite and there exists $c > 0$ such that for any outcome $\omega \in \Omega$, $\mu(\{\omega\}) = c$, then $\mu(A) = c\,|A|$ for any $A \subseteq \Omega$. We call $\mu$ the uniform measure on $\Omega$. In the case $c = 1$, we call $\mu$ the counting measure, since $\mu(A) = |A|$.
Proof. Denoting $A = \{\omega_1, \ldots, \omega_k\}$ for distinct outcomes $\omega_1, \ldots, \omega_k \in \Omega$,
$$\mu(A) = \mu(\{\omega_1\}) + \cdots + \mu(\{\omega_k\}) = \underbrace{c + \cdots + c}_{k \text{ times}} = ck.$$
Therefore, $\mu(A) = c\,|A|$. In particular, for any $A \subseteq \Omega$,
$$\frac{\mu(A)}{c} = |A|,$$
so that we can regard $\frac{\mu}{c}$ as the counting measure. $\blacksquare$
Corollary 1. Under the conditions of Theorem 1, if $\mathbb{P}$ is a uniform probability measure, then for any $A \subseteq \Omega$,
$$\mathbb{P}(A) = \frac{|A|}{|\Omega|}.$$
Proof. By Theorem 1, for any $A \subseteq \Omega$,
$$\mathbb{P}(A) = c\,|A|.$$
Hence,
$$1 = \mathbb{P}(\Omega) = c\,|\Omega| \implies c = \frac{1}{|\Omega|}.$$
In particular,
$$\mathbb{P}(A) = \frac{|A|}{|\Omega|}. \qquad \blacksquare$$
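Corollary 1 reduces uniform probabilities to counting, which is straightforward to express in code (a sketch; `uniform_prob` is our own helper name):

```python
def uniform_prob(event, omega):
    """P(A) = |A| / |Omega| under the uniform probability measure on omega."""
    return len(event) / len(omega)

omega = {"HH", "HT", "TH", "TT"}
print(uniform_prob({"HT", "TH"}, omega))  # 0.5
```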
Let’s return to the situation of flipping the coin two times, with $\Omega = \{HH, HT, TH, TT\}$. Recall that our goal is to determine the score after $2$ coin flips. There are three possible scores: $0$, $1$, and $2$. Here’s a simple question: what is the probability that we end up with a score of $1$? Well, we notice that we can interpret the score as a function $X : \Omega \to \mathbb{R}$ defined as follows:
$$X(HH) = 2, \quad X(HT) = X(TH) = 1, \quad X(TT) = 0.$$
This means that the outcomes $HT$ and $TH$ yield $X = 1$. Using inverse-image notation,
$$X^{-1}(\{1\}) = \{HT, TH\}.$$
Well then, what do we mean by $\mathbb{P}(X = 1)$? Intuitively, since there are $2$ such outcomes with equal probability, we should have a combined probability of $\frac{1}{4} + \frac{1}{4} = \frac{1}{2}$. The probability is derived from the subset $X^{-1}(\{1\})$ of desired outcomes:
$$\mathbb{P}(X = 1) := \mathbb{P}(X^{-1}(\{1\})) = \mathbb{P}(\{HT, TH\}) = \frac{1}{2}.$$
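The same computation, mirrored in a self-contained Python sketch (the score function `X` is our own name for it):

```python
from itertools import product

omega = list(product("HT", repeat=2))      # HH, HT, TH, TT
P = {outcome: 1 / len(omega) for outcome in omega}

def X(outcome):
    """Score of an outcome: one point per 'Head'."""
    return outcome.count("H")

# Inverse image X^{-1}({1}): every outcome whose score is exactly 1
preimage = [outcome for outcome in omega if X(outcome) == 1]
print(preimage)                                 # [('H', 'T'), ('T', 'H')]
print(sum(P[outcome] for outcome in preimage))  # 0.5
```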
Definition 2. Let $\Omega$ be a finite sample space and $\mu$ be a measure on $\Omega$. We call any function $X : \Omega \to \mathbb{R}$ a random variable on $\Omega$. It is clear that $X(\Omega)$ is finite.
Theorem 2. As per the notation in Definition 2, denote $\Omega' = X(\Omega)$. Then the map $\mu' : 2^{\Omega'} \to [0, \infty)$ defined by
$$\mu'(A) = \mu(X^{-1}(A))$$
for any $A \subseteq \Omega'$ is a measure on $\Omega'$. We abbreviate $\mu' = \mu \circ X^{-1}$ and call it the push-forward measure of $\mu$ under $X$.
Proof. For any distinct outcomes $x_1, \ldots, x_k \in \Omega'$, the preimages $X^{-1}(\{x_1\}), \ldots, X^{-1}(\{x_k\})$ are pairwise disjoint and their union is $X^{-1}(\{x_1, \ldots, x_k\})$. Summing over the distinct outcomes of this union and regrouping by preimage,
$$\mu'(\{x_1, \ldots, x_k\}) = \mu(X^{-1}(\{x_1, \ldots, x_k\})) = \mu(X^{-1}(\{x_1\})) + \cdots + \mu(X^{-1}(\{x_k\})) = \mu'(\{x_1\}) + \cdots + \mu'(\{x_k\}). \qquad \blacksquare$$
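Computationally, the push-forward just collapses the singleton weights of $\mu$ along $X$; a sketch in the representation we have been using (hypothetical names):

```python
from collections import defaultdict
from itertools import product

omega = list(product("HT", repeat=2))
P = {outcome: 1 / len(omega) for outcome in omega}

def pushforward(weights, X):
    """Push-forward: the weight of a value x is the measure of X^{-1}({x})."""
    out = defaultdict(float)
    for outcome, w in weights.items():
        out[X(outcome)] += w
    return dict(out)

score = lambda outcome: outcome.count("H")
print(pushforward(P, score))  # {2: 0.25, 1: 0.5, 0: 0.25}
```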
Before continuing, you might wonder: what’s with the rather complicated terminology? Well, it’s to give you an early taste of the key vocabulary that one cannot escape when studying measure theory and probability.
In fact, there is an even simpler question that isn’t entirely obvious: given a set $A$, how do we compute $|A|$? This is where we need to discuss some basic combinatorics.
—Joel Kindiak, 22 Jun 25, 1249H