Moment Generating Function and Probability Generating Function

The moment generating function (mgf) and the probability generating function (pgf) are useful tools in probability theory. Since Loss Models relies heavily on probability, the mgf and pgf are essential techniques, so I am posting some notes about them.

The definition of the moment generating function (univariate case) is

M_{X}(t) = E[e^{tX}] = \int_{-\infty}^{\infty}e^{tx}f(x)\mathrm{d}x

More generally, if X=(X_{1}, X_{2}, \dots, X_{n})^{T}, we use t^{T}X instead of tX:

M_{X}(t) = E[e^{t^{T}X}]

The definition of the mgf may look complicated, so why define it this way? According to Wikipedia, the mgf, as its name suggests, can be used to find all the moments of the distribution. Expanding e^{tX} as a Taylor series, we have

e^{tX} = 1 + tX + \frac{t^{2}X^{2}}{2!} + \frac{t^{3}X^{3}}{3!}+\cdots+\frac{t^{n}X^{n}}{n!}+\cdots

Taking expectations term by term,

M_{X}(t) = 1 + tE[X]+ \frac{t^{2}E[X^{2}]}{2!} + \frac{t^{3}E[X^{3}]}{3!} + \cdots + \frac{t^{n}E[X^{n}]}{n!}+\cdots

Differentiating M_{X}(t) n times with respect to t and setting t = 0 then picks out E[X^{n}], i.e. E[X^{n}] = M_{X}^{(n)}(0).
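As a quick check, take the standard exponential example with rate \lambda, whose mgf is

M_{X}(t) = \int_{0}^{\infty}e^{tx}\lambda e^{-\lambda x}\mathrm{d}x = \frac{\lambda}{\lambda-t},\quad t < \lambda

Differentiating once and twice and setting t = 0 gives E[X] = 1/\lambda and E[X^{2}] = 2/\lambda^{2}, matching the known moments.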

If X_{1}, X_{2}, \dots, X_{n} is a sequence of independent random variables and S_{n} = \sum\limits_{i=1}^{n}a_{i}X_{i}, then the mgf of S_{n} is

M_{S_{n}}(t) = M_{X_{1}}(a_{1}t)M_{X_{2}}(a_{2}t)\cdots M_{X_{n}}(a_{n}t)
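For example, if X_{i}\sim N(\mu_{i},\sigma_{i}^{2}) with the standard normal mgf M_{X_{i}}(t) = \exp(\mu_{i}t + \sigma_{i}^{2}t^{2}/2), then

M_{S_{n}}(t) = \exp\left(t\sum\limits_{i=1}^{n}a_{i}\mu_{i} + \frac{t^{2}}{2}\sum\limits_{i=1}^{n}a_{i}^{2}\sigma_{i}^{2}\right)

which is again a normal mgf, so S_{n}\sim N\left(\sum\limits_{i=1}^{n}a_{i}\mu_{i},\ \sum\limits_{i=1}^{n}a_{i}^{2}\sigma_{i}^{2}\right).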

Note that some distributions have no mgf, because E[e^{tX}] is infinite for every t > 0. The lognormal distribution is the classic example: its moments E[X^{n}] grow so quickly that the series above diverges for every t \neq 0.

For the pgf, the definition is

G(z) = E[z^{X}]

Substituting z = e^{t} takes us directly to the mgf:

G(e^{t}) = E[e^{tX}] = M_{X}(t)
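For instance, a Bernoulli(p) random variable has pgf G(z) = (1-p) + pz, so its mgf is M_{X}(t) = G(e^{t}) = (1-p) + pe^{t}.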

Reading the Wikipedia article on the pgf, the pgf is indeed meant for discrete random variables, specifically those taking values in the non-negative integers; that is why the sums below run over x = 0, 1, 2, \dots

For the univariate case, writing p(x) = P(X = x), the pgf is

G(z) = E(z^{X}) = \sum\limits_{x=0}^{\infty}p(x)z^{x}
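As a standard example, for X\sim\mathrm{Poisson}(\lambda) with p(x) = e^{-\lambda}\lambda^{x}/x!,

G(z) = \sum\limits_{x=0}^{\infty}\frac{e^{-\lambda}\lambda^{x}}{x!}z^{x} = e^{-\lambda}e^{\lambda z} = e^{\lambda(z-1)}

This example will come back below.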

And for the multivariate case, with X = (X_{1},\dots,X_{d}), the definition is

G(z) = G(z_{1},\dots,z_{d}) = E(z_{1}^{X_{1}}\cdots z_{d}^{X_{d}}) = \sum\limits_{x_{1},\dots,x_{d}=0}^{\infty}p(x_{1},\dots,x_{d})z_{1}^{x_{1}}\cdots z_{d}^{x_{d}}
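For instance, if X_{1},\dots,X_{d} are independent, the joint pgf factors into the marginal pgfs:

G(z_{1},\dots,z_{d}) = E(z_{1}^{X_{1}})\cdots E(z_{d}^{X_{d}}) = G_{X_{1}}(z_{1})\cdots G_{X_{d}}(z_{d})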

From the definition, G(z) is a power series whose coefficients p(x) are non-negative and sum to 1, so it converges absolutely for |z|\leq 1. Differentiating k times and letting z \rightarrow 1^{-} gives the factorial moments:

E\left[\frac{X!}{(X-k)!}\right] = G^{(k)}(1^{-}),\ k \geq 0
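Continuing the Poisson example, G(z) = e^{\lambda(z-1)} gives G^{(k)}(z) = \lambda^{k}e^{\lambda(z-1)}, so

E[X] = G^{(1)}(1^{-}) = \lambda,\quad E[X(X-1)] = G^{(2)}(1^{-}) = \lambda^{2}

and therefore \mathrm{Var}(X) = E[X(X-1)] + E[X] - (E[X])^{2} = \lambda.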

If X_{1}, X_{2}, \dots, X_{n} is a sequence of independent random variables and S_{n} = \sum\limits_{i=1}^{n}a_{i}X_{i}, then the pgf of S_{n} is

G_{S_{n}}(z) = G_{X_{1}}(z^{a_{1}})G_{X_{2}}(z^{a_{2}})\cdots G_{X_{n}}(z^{a_{n}})

In particular, when every a_{i} = 1 (a plain sum), this reduces to G_{S_{n}}(z) = G_{X_{1}}(z)G_{X_{2}}(z)\cdots G_{X_{n}}(z).

Similarly, if S = X_{1}-X_{2} (taking a_{1} = 1, a_{2} = -1), we have

G_{S}(z) = G_{X_{1}}(z)G_{X_{2}}(1/z)
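For example, if X_{i}\sim\mathrm{Poisson}(\lambda_{i}) are independent and S_{n} = X_{1}+\cdots+X_{n}, then

G_{S_{n}}(z) = \prod\limits_{i=1}^{n}e^{\lambda_{i}(z-1)} = e^{(\lambda_{1}+\cdots+\lambda_{n})(z-1)}

so the sum is again Poisson, with parameter \lambda_{1}+\cdots+\lambda_{n}.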

Note: all the material in this post comes from wikipedia.org; check it out if you want more detail.
