Borel Measurability, Integration, and Mathematical Expectations
Consider the following situation: You are sitting in a bar next to a guy who proposes to play the following game. He will roll a die and pay you a dollar per dot. However, you have to pay him an amount y up front each time he rolls the die. Which amount y should you pay him in order for the two of you to come out even if this game is played indefinitely?
Let X be the amount you win in a single play. Then in the long run you will receive X = 1 dollar in 1 out of 6 plays, X = 2 dollars in 1 out of 6 plays, up to X = 6 dollars in 1 out of 6 plays. Thus, on average you will receive (1 + 2 + ··· + 6)/6 = 3.5 dollars per play; hence, the answer is y = 3.5.
Clearly, X is a random variable: X(ω) = Σ_{j=1}^{6} j · I(ω ∈ {j}), where here, and in the sequel, I(·) denotes the indicator function:

I(true) = 1, I(false) = 0.
This random variable is defined on the probability space {Ω, ℱ, P}, where Ω = {1, 2, 3, 4, 5, 6}, ℱ is the σ-algebra of all subsets of Ω, and P({ω}) = 1/6 for each ω ∈ Ω. Moreover, y = Σ_{j=1}^{6} j/6 = Σ_{j=1}^{6} j · P({j}). This amount y is called the mathematical expectation of X and is denoted by E(X).
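To make the computation concrete, here is a minimal Python sketch (not part of the original text; the variable names are illustrative) that evaluates E(X) = Σ_{j=1}^{6} j · P({j}) directly and checks it against the long-run average of simulated rolls.

```python
import random

# Expectation of the dice outcome from the definition E(X) = sum_j j * P({j}),
# with P({j}) = 1/6 for j = 1, ..., 6.
expectation = sum(j * (1 / 6) for j in range(1, 7))
print(expectation)  # 3.5

# The long-run average over many simulated plays approaches E(X) = 3.5.
n_plays = 1_000_000
average = sum(random.randint(1, 6) for _ in range(n_plays)) / n_plays
print(round(average, 2))  # approximately 3.5
```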
More generally, if X is the outcome of a game with payoff function g(X), where X is discrete: p_j = P[X = x_j] > 0 with Σ_{j=1}^{n} p_j = 1 (n is possibly infinite), and if this game is repeated indefinitely, then the average payoff will be
y = E[g(X)] = Σ_{j=1}^{n} g(x_j) p_j.   (2.1)
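As an illustration of (2.1), the following Python sketch (illustrative only; the function name discrete_expectation and the payoff g(x) = x² are assumptions, not from the text) evaluates Σ_{j=1}^{n} g(x_j) p_j for the dice distribution.

```python
def discrete_expectation(xs, ps, g):
    """Return sum_j g(x_j) * p_j, i.e., E[g(X)] for a discrete X as in (2.1)."""
    return sum(g(x) * p for x, p in zip(xs, ps))

xs = [1, 2, 3, 4, 5, 6]  # possible outcomes x_j
ps = [1 / 6] * 6         # probabilities p_j = P[X = x_j]
print(discrete_expectation(xs, ps, lambda x: x))       # 3.5 (payoff g(x) = x)
print(discrete_expectation(xs, ps, lambda x: x ** 2))  # about 15.17 (payoff g(x) = x^2)
```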
Some computer programming languages, such as Fortran, Visual Basic, C++, and so on, have a built-in function that generates uniformly distributed random numbers between zero and one. Now suppose that the guy next to you at the bar pulls out his laptop computer and proposes to generate random numbers
and pay you X dollars per game if the random number involved is X, provided you pay him an amount y up front each time. The question is again: which amount y should you pay him in order for the two of you to come out even if this game is played indefinitely?
Because the random variable X involved is uniformly distributed on [0, 1], it has distribution function F(x) = 0 for x < 0, F(x) = x for 0 ≤ x ≤ 1, F(x) = 1 for x > 1, with density function f(x) = F′(x) = I(0 < x < 1). More formally, X = X(ω) = ω is a nonnegative random variable defined on the probability space {Ω, ℱ, P}, where Ω = [0, 1], ℱ = [0, 1] ∩ 𝔅, that is, the σ-algebra of all Borel sets in [0, 1], and P is the Lebesgue measure on [0, 1].
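As a quick check of this description, the following Python sketch (illustrative; not part of the original argument) compares the empirical distribution function of computer-generated random numbers with F(x) = x on [0, 1].

```python
import random

def F(x):
    """Distribution function of the uniform [0, 1] law: F(x) = x on [0, 1]."""
    return 0.0 if x < 0 else (x if x <= 1 else 1.0)

# Empirical distribution function of generated random numbers versus F.
n = 100_000
draws = [random.random() for _ in range(n)]
for x in (0.25, 0.5, 0.75):
    empirical = sum(u <= x for u in draws) / n
    print(x, round(empirical, 3), F(x))  # the two values should be close
```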
To determine y in this case, let
X*(ω) = Σ_{j=1}^{m} b_{j−1} · I(ω ∈ (b_{j−1}, b_j]),

where 0 = b_0 < b_1 < ··· < b_m = 1. Clearly, 0 ≤ X* ≤ X with probability 1, and, as in the dice game, the amount y involved will be greater than or equal to Σ_{j=1}^{m} b_{j−1} P((b_{j−1}, b_j]) = Σ_{j=1}^{m} b_{j−1}(b_j − b_{j−1}). Taking the supremum over all possible partitions ∪_{j=1}^{m} (b_{j−1}, b_j] of (0, 1] then yields the integral y = E(X) = ∫_0^1 x dx = 0.5.
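The role of the partitions can be made visible numerically. The sketch below (illustrative only; it uses the equal-width partitions b_j = j/m, which are one possible choice, not the only one) computes the lower sums Σ_{j=1}^{m} b_{j−1}(b_j − b_{j−1}) and shows them increasing toward 0.5 as the partition is refined.

```python
def lower_sum(m):
    """Lower sum sum_j b_{j-1} * (b_j - b_{j-1}) for the equal-width
    partition b_j = j/m of (0, 1]."""
    b = [j / m for j in range(m + 1)]  # b_0 = 0, ..., b_m = 1
    return sum(b[j - 1] * (b[j] - b[j - 1]) for j in range(1, m + 1))

for m in (2, 10, 100, 1000):
    print(m, lower_sum(m))  # 0.25, 0.45, 0.495, 0.4995 -> converges to 0.5
```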
More generally, if X is the outcome of a game with payoff function g(X), where X has an absolutely continuous distribution with density f(x), then

y = E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx.
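As a numerical illustration, the following sketch (not from the text) approximates this integral by a midpoint rule for the uniform density f(x) = I(0 < x < 1); the payoff g(x) = x² is a hypothetical choice, for which the exact value is 1/3.

```python
def continuous_expectation(g, f, a, b, n=100_000):
    """Midpoint-rule approximation of E[g(X)] = integral of g(x) * f(x) dx
    over [a, b], for X with density f."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) * f(a + (i + 0.5) * h) for i in range(n)) * h

def f(x):
    """Uniform [0, 1] density: f(x) = I(0 < x < 1)."""
    return 1.0 if 0 < x < 1 else 0.0

print(continuous_expectation(lambda x: x, f, 0, 1))       # approximately 0.5
print(continuous_expectation(lambda x: x ** 2, f, 0, 1))  # approximately 1/3
```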
Now two questions arise. First, under what conditions is g(X) a well-defined random variable? Second, how do we determine E(X) if the distribution of X is neither discrete nor absolutely continuous?