Multivariate Normal Bookkeeping

For random variables X_1,\dots, X_n, denote \mathbf X := (X_1,\dots, X_n)^{\mathrm T} and define

\mathbb E[\mathbf X] = (\mathbb E[X_1], \dots, \mathbb E[X_n])^{\mathrm T}.

Define the covariance matrix \boldsymbol{ \Sigma }_{\mathbf X} \in \mathcal M_{n \times n}( \mathbb R ) by

\displaystyle (\boldsymbol{ \Sigma }_{\mathbf X})_{i,j} = \mathrm{Cov}(X_i, X_j).

Note that \boldsymbol{ \Sigma }_{\mathbf X} = \mathbb E[(\mathbf X - \mathbb E[\mathbf X]) (\mathbf X - \mathbb E[\mathbf X])^{\mathrm T}].
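For readers who like to verify such identities numerically, here is a quick sketch in Python (assuming numpy is available; the mixing matrix and sample size below are arbitrary choices for illustration):

```python
# Check that the entrywise covariance matrix agrees with the
# outer-product formula E[(X - E[X])(X - E[X])^T], estimated from a sample.
import numpy as np

rng = np.random.default_rng(0)
L = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.2, 0.3, 1.0]])
X = rng.standard_normal((100_000, 3)) @ L.T   # rows are draws of (X_1, X_2, X_3)

entrywise = np.cov(X, rowvar=False)           # (Sigma)_{ij} = Cov(X_i, X_j)

centred = X - X.mean(axis=0)
outer = centred.T @ centred / (len(X) - 1)    # sample version of the outer product

assert np.allclose(entrywise, outer)
```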

Problem 1. For i.i.d. Z_1,\dots, Z_n \sim \mathcal N(0, 1), define \mathbf Z := (Z_1,\dots, Z_n)^{\mathrm T}. Calculate the p.d.f. f_{\mathbf Z} of \mathbf Z and evaluate \boldsymbol{ \Sigma }_{\mathbf Z}.


Solution. For each i, the p.d.f. f_{Z_i} of Z_i is

\displaystyle f_{Z_i}(z_i) = \frac{1}{\sqrt{2\pi}} e^{-\frac{z_i^2}{2}}.

Since Z_1,\dots, Z_n are independent,

\begin{aligned} f_{\mathbf Z}(\mathbf z) &= \prod_{i=1}^n f_{Z_i}(z_i) = \frac 1{ \sqrt{ (2\pi)^{n} } } e^{-\frac 12 \sum_{i=1}^n z_i^2} = \frac 1{ \sqrt{ (2\pi)^{n} } } e^{-\frac 12 \mathbf z^{\mathrm T} \mathbf z}.\end{aligned}

Furthermore,

{ ( \boldsymbol{ \Sigma }_{\mathbf Z} )}_{i,j} =  \mathrm{Cov}(Z_i, Z_j)  = \mathbb I_{\{0\}} (i-j) = \mathbf I_{i,j},

where \mathbf I \equiv \mathbf I_n denotes the n \times n identity matrix.
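As a quick numerical double-check (a sketch assuming numpy and scipy are available; the dimension and the point \mathbf z below are arbitrary choices), the product density, the closed form above, and scipy's \mathcal N(\mathbf 0, \mathbf I) density should all agree:

```python
# The p.d.f. of Z three ways: product of univariate normal densities,
# the closed form derived above, and scipy's multivariate normal.
import numpy as np
from scipy.stats import multivariate_normal, norm

n = 4
z = np.array([0.3, -1.2, 0.0, 2.1])

f_product = np.prod(norm.pdf(z))
f_formula = np.exp(-0.5 * z @ z) / np.sqrt((2 * np.pi) ** n)
f_scipy = multivariate_normal(mean=np.zeros(n), cov=np.eye(n)).pdf(z)

assert np.allclose([f_product, f_formula], f_scipy)
```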

Problem 2. For any invertible matrix \mathbf A \in \mathcal M_{n \times n}(\mathbb R) and \boldsymbol{\mu} \in \mathbb R^n, define \mathbf X = \boldsymbol{\mu} + \mathbf A \mathbf Z . Calculate the p.d.f. f_{\mathbf X} of \mathbf X and evaluate \boldsymbol{ \Sigma }_{\mathbf X}.


Solution. We first treat the case \boldsymbol{ \mu } = \mathbf 0, so that \mathbf X = \mathbf A \mathbf Z. Define the bijective and continuous map g : \mathbb R^n \to \mathbb R^n by g(\mathbf z) := \mathbf A \mathbf z. We observe that for K \in \frak{B}(\mathbb R^n),

\mathbb P_{\mathbf X}(K) = \mathbb P(\mathbf X \in K) = \mathbb P(\mathbf Z \in \mathbf A^{-1}(K)) = (\mathbb P_{\mathbf Z} \circ \mathbf A^{-1})(K).

That is, \mathbb P_{\mathbf X} = \mathbb P_{\mathbf Z} \circ \mathbf A^{-1} is the pushforward measure of \mathbb P_{\mathbf Z} under g. Letting \lambda denote the n-dimensional Lebesgue measure and writing |\mathbf A| for the absolute value of the determinant of \mathbf A, we leave it as an exercise in linear algebra to verify that

\displaystyle \frac{ \mathrm d\lambda(\mathbf A^{-1}\mathbf x) }{ \mathrm d\lambda(\mathbf x) } = |\mathbf A^{-1}| = \frac{1}{|\mathbf A|}.

By a change of variables, for any K \in \frak{B}(\mathbb R^n),

\begin{aligned} \mathbb P_{\mathbf X} (K) &= \int_{\mathbb R^n} \mathbb I_K\, \mathrm d\mathbb P_{\mathbf X} \\ &= \int_{\mathbb R^n} \mathbb I_K\, \mathrm d(\mathbb P_{\mathbf Z} \circ \mathbf A^{-1}) \\ &= \int_{\mathbb R^n} \mathbb I_K \circ \mathbf A \, \mathrm d\mathbb P_{\mathbf Z} \\ &= \int_{\mathbb R^n} \mathbb I_K( \mathbf A\mathbf z ) \cdot f_{\mathbf Z}(\mathbf z) \, \mathrm d\lambda(\mathbf z) \\  &= \int_{\mathbb R^n} \mathbb I_K(\mathbf x) \cdot f_{\mathbf Z}(\mathbf A^{-1} \mathbf x ) \, \mathrm d\lambda(\mathbf A^{-1} \mathbf x ) \\ &= \int_{\mathbb R^n} \mathbb I_K(\mathbf x) \cdot f_{\mathbf Z}(\mathbf A^{-1} \mathbf x ) \cdot \frac{1}{|\mathbf A|}\, \mathrm d\lambda(\mathbf x) \\ &= \int_{K} f_{\mathbf Z}(\mathbf A^{-1} \mathbf x ) \cdot \frac{1}{|\mathbf A|}\, \mathrm d\lambda(\mathbf x).\end{aligned}

By the Radon-Nikodým theorem,

\displaystyle \int_K f_{\mathbf X}(\mathbf x)\, \mathrm d\lambda(\mathbf x) = \mathbb P_{\mathbf X} (K) = \int_{K} f_{\mathbf Z}(\mathbf A^{-1} \mathbf x ) \cdot \frac{1}{|\mathbf A|}\, \mathrm d\lambda(\mathbf x),

so that

\begin{aligned} f_{\mathbf X}(\mathbf x) &= \frac{1}{|\mathbf A|} \cdot  f_{\mathbf Z}(\mathbf A^{-1}\mathbf x) \\ &= \frac 1{ |\mathbf A| \cdot \sqrt{ (2\pi)^{n} } } \cdot e^{-\frac 12 \mathbf x^{\mathrm T} (\mathbf A\mathbf A^{\mathrm T})^{-1} \mathbf x } \\ &= \frac 1{ \sqrt{ (2\pi)^{n} \cdot |\mathbf A \mathbf A^{\mathrm T}| } } \cdot e^{-\frac 12 \mathbf x^{\mathrm T} (\mathbf A\mathbf A^{\mathrm T})^{-1} \mathbf x }.\end{aligned}

Furthermore,

\begin{aligned} \boldsymbol{\Sigma}_{\mathbf X} &= \mathbb E[(\mathbf A \mathbf Z - \mathbb E[\mathbf A \mathbf Z]) (\mathbf A \mathbf Z - \mathbb E[\mathbf A \mathbf Z])^{\mathrm T}] \\ &= \mathbb E[\mathbf A (\mathbf Z - \mathbb E[\mathbf Z]) (\mathbf Z - \mathbb E[\mathbf Z])^{\mathrm T}\mathbf A^{\mathrm T} ] \\ &= \mathbf A \mathbb E[(\mathbf Z - \mathbb E[\mathbf Z]) (\mathbf Z - \mathbb E[\mathbf Z])^{\mathrm T} ]\mathbf A^{\mathrm T} \\ &= \mathbf A \boldsymbol{\Sigma}_{\mathbf Z} \mathbf A^{\mathrm T} = \mathbf A  \mathbf A^{\mathrm T} . \end{aligned}

In the marginally more general case, \mathbf X - \boldsymbol{\mu} = \mathbf A \mathbf Z. We leave it as an exercise to verify that \mathbb E[\mathbf X] = \boldsymbol{ \mu } and \boldsymbol{ \Sigma }_{\mathbf X} = \boldsymbol{ \Sigma }_{\mathbf A \mathbf Z} = \mathbf A\mathbf A^{\mathrm T}, so that finally

\begin{aligned} f_{\mathbf X}(\mathbf x) = f_{\mathbf A \mathbf Z}(\mathbf x - \boldsymbol{\mu}) &= \frac 1{ \sqrt{ (2\pi)^{n} \cdot |\boldsymbol{ \Sigma }_{\mathbf A \mathbf Z}| } }  \cdot e^{-\frac 12 (\mathbf x - \boldsymbol{\mu})^{\mathrm T} \boldsymbol{ \Sigma }_{\mathbf A \mathbf Z}^{-1} (\mathbf x - \boldsymbol{\mu}) }. \end{aligned}
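A hedged numerical sketch of this result (assuming numpy and scipy; the matrix \mathbf A, the vector \boldsymbol{\mu}, the test point, and the sample size are arbitrary choices): the derived density should match scipy's \mathcal N(\boldsymbol{\mu}, \mathbf A\mathbf A^{\mathrm T}) density, and the sample covariance of simulated draws should approach \mathbf A\mathbf A^{\mathrm T}.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
A = np.array([[2.0, 0.0],
              [1.0, 0.5]])          # any invertible matrix
mu = np.array([1.0, -1.0])
Sigma = A @ A.T

# Density check at an arbitrary point x.
x = np.array([0.7, 0.2])
quad = (x - mu) @ np.linalg.inv(Sigma) @ (x - mu)
f_derived = np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** 2 * np.linalg.det(Sigma))
assert np.isclose(f_derived, multivariate_normal(mean=mu, cov=Sigma).pdf(x))

# Covariance check: draws of X = mu + A Z have sample covariance near A A^T.
X = mu + rng.standard_normal((200_000, 2)) @ A.T
assert np.allclose(np.cov(X, rowvar=False), Sigma, atol=0.05)
```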

Definition 1. A random variable \mathbf X is multivariate normal, denoted \mathbf X \sim \mathcal N(\boldsymbol{ \mu }, \boldsymbol{ \Sigma }) for \boldsymbol{\mu} \in \mathbb R^n and symmetric positive-definite \boldsymbol{\Sigma} \in \mathcal M_{n \times n}(\mathbb R), if it has a p.d.f. f_{\mathbf X} given by

\displaystyle f_{\mathbf X}(\mathbf x) = \frac{1}{ \sqrt{ (2\pi)^n \cdot |\boldsymbol{ \Sigma } | } } \cdot e^{-\frac 12 (\mathbf x - \boldsymbol{\mu})^{\mathrm T} \boldsymbol{\Sigma}^{-1}  (\mathbf x - \boldsymbol{\mu})}.

In this case, we say that the mean of \mathbf X is \mathbb E[ \mathbf X ] = \boldsymbol{\mu}, and the covariance of \mathbf X is \boldsymbol{\Sigma}_{\mathbf{X}} = \boldsymbol{\Sigma}. For example, \mathbf Z \sim \mathcal N(\mathbf 0, \mathbf I_n).

Problem 3. Fix \mathbf X \sim \mathcal N(\boldsymbol{ \mu }, \boldsymbol{\Sigma} ).

  • Prove that if \boldsymbol{\Sigma} is diagonal with positive diagonal entries \sigma_i^2 > 0, then X_1,\dots, X_n are independent with X_i \sim \mathcal N( \mu_i, \sigma_i^2 ).
  • Prove that for any invertible \mathbf A \in \mathcal M_{n \times n}(\mathbb R) and \boldsymbol{ \nu } \in \mathbb R^n, \mathbf Y := \boldsymbol{\nu} + \mathbf A \mathbf X \sim \mathcal N(\boldsymbol{ \mu }_{\mathbf Y}, \boldsymbol{ \Sigma }_{\mathbf Y}) for constants \boldsymbol{ \mu }_{\mathbf Y}, \boldsymbol{ \Sigma }_{\mathbf Y} to be calculated.

Solution. For the first point, if \boldsymbol{\Sigma} is diagonal with \boldsymbol{ \Sigma }_{i,i} = \sigma_i^2 > 0, then \boldsymbol{\Sigma}^{-1} is diagonal with (\boldsymbol{ \Sigma }^{-1})_{i,i} = 1/\sigma_i^2 > 0, and

\displaystyle (\mathbf x - \boldsymbol{\mu})^{\mathrm T} \boldsymbol{\Sigma}^{-1} (\mathbf x - \boldsymbol{\mu}) = \sum_{i=1}^n \frac{ (x_i - \mu_i)^2 }{ \sigma_i^2 }.

Some algebra then yields

\begin{aligned} f_{\mathbf X}(\mathbf x) &= \frac{1}{ \sqrt{ (2\pi)^n \cdot |\boldsymbol{ \Sigma } | } } \cdot  e^{-\frac 12 (\mathbf x - \boldsymbol{\mu})^{\mathrm T} \boldsymbol{\Sigma}^{-1}  (\mathbf x - \boldsymbol{\mu})} \\ &= \prod_{i=1}^n \frac{1}{\sigma_i \sqrt{ 2\pi } } \cdot e^{-\frac 12 \sum_{i=1}^n \frac{ (x_i - \mu_i)^2 }{ \sigma_i^2 } } \\ &= \prod_{i=1}^n \frac{1}{\sigma_i \sqrt{ 2\pi } } \cdot \prod_{i=1}^n  e^{-\frac{ (x_i - \mu_i)^2 }{ 2\sigma_i^2 } } \\ &= \prod_{i=1}^n \frac{1}{\sigma_i \sqrt{ 2\pi } } e^{-\frac{ (x_i - \mu_i)^2 }{ 2\sigma_i^2 } } = \prod_{i=1}^n f_{X_i}(x_i). \end{aligned}

Therefore, X_1,\dots, X_n are independent with X_i \sim \mathcal N(\mu_i , \sigma_i^2).

For the second point, follow the proof in Problem 2 with needful bookkeeping to obtain

\begin{aligned} f_{\mathbf Y}(\mathbf y) &= f_{\mathbf A\mathbf X}(\mathbf y - \boldsymbol{\nu}) = \frac 1{|\mathbf A|} \cdot f_{\mathbf X}(\mathbf A^{-1}(\mathbf y - \boldsymbol{\nu})), \end{aligned}

which simplifies to Definition 1 with \mathbb E[\mathbf Y] = \boldsymbol{\nu} + \mathbf A \boldsymbol{\mu} and \boldsymbol{\Sigma}_{\mathbf Y} =   \mathbf A \boldsymbol{\Sigma}_{\mathbf X}  \mathbf A^{\mathrm T}.
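As a sanity check of the second point (a sketch assuming numpy; \boldsymbol{\mu}, \boldsymbol{\Sigma}, \mathbf A, \boldsymbol{\nu}, and the sample size are arbitrary choices), simulate \mathbf X \sim \mathcal N(\boldsymbol{\mu}, \boldsymbol{\Sigma}), form \mathbf Y = \boldsymbol{\nu} + \mathbf A \mathbf X, and compare sample moments against \boldsymbol{\nu} + \mathbf A\boldsymbol{\mu} and \mathbf A \boldsymbol{\Sigma} \mathbf A^{\mathrm T}:

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([0.5, -0.5])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])          # invertible
nu = np.array([1.0, 1.0])

X = rng.multivariate_normal(mu, Sigma, size=300_000)
Y = nu + X @ A.T                    # rows are draws of Y = nu + A X

assert np.allclose(Y.mean(axis=0), nu + A @ mu, atol=0.03)
assert np.allclose(np.cov(Y, rowvar=False), A @ Sigma @ A.T, atol=0.1)
```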

Problem 4. Let \mathbf X \sim \mathcal N(\boldsymbol{ \mu } , \boldsymbol{ \Sigma }). Prove that there exists an invertible matrix \mathbf A \in \mathcal M_{n \times n}(\mathbb R) such that \mathbf X = \boldsymbol{ \mu } + \mathbf A \mathbf Z . In particular, each X_i is normally distributed.


Solution. By definition,

\displaystyle f_{\mathbf X}(\mathbf x) = \frac{1}{ \sqrt{ (2\pi)^n \cdot |\boldsymbol{ \Sigma } | } } \cdot e^{-\frac 12 (\mathbf x - \boldsymbol{\mu})^{\mathrm T} \boldsymbol{\Sigma}^{-1}  (\mathbf x - \boldsymbol{\mu})}.

We first assume \boldsymbol{\mu} = \mathbf 0. If we can find some invertible matrix \mathbf B such that \mathbf B \mathbf X = \mathbf Z, then we can set \mathbf A := \mathbf B^{-1}. To that end, we note that \boldsymbol{\Sigma} is symmetric since

\displaystyle \boldsymbol{\Sigma}_{i,j} = \mathrm{Cov}(X_i, X_j) = \mathrm{Cov}(X_j, X_i) = \boldsymbol{\Sigma}_{j,i}.

By the spectral theorem for real symmetric matrices, \boldsymbol{\Sigma} is orthogonally diagonalisable. That is, there exists an orthogonal matrix \mathbf P (i.e. \mathbf P^{-1} = \mathbf P^{\mathrm T}) and a diagonal matrix \mathbf D such that

\mathbf{P}^{\mathrm T} \boldsymbol{\Sigma} \mathbf{P} = \mathbf{D}\quad \Rightarrow \quad \boldsymbol{\Sigma} = \mathbf{P} \mathbf{D} \mathbf{P}^{\mathrm T}.

Furthermore, since \boldsymbol{\Sigma} is positive definite, the entries on the diagonal of \mathbf D (the eigenvalues of \boldsymbol{\Sigma}) are all positive, denoted \sigma_i^2 > 0. Define \boldsymbol{\Delta} by

\displaystyle \boldsymbol{\Delta}_{i,j} := \begin{cases} \sigma_i, & i = j, \\ 0, & i \neq j. \end{cases}

Then \boldsymbol{\Delta}\boldsymbol{\Delta}^{\mathrm T}  = \mathbf{D}, so that

\boldsymbol{\Sigma} = \mathbf{P} \mathbf{D} \mathbf{P}^{\mathrm T} = \mathbf{P} \boldsymbol{\Delta}\boldsymbol{\Delta}^{\mathrm T} \mathbf{P}^{\mathrm T} = (\mathbf{P} \boldsymbol{\Delta}) (\mathbf{P} \boldsymbol{\Delta})^{\mathrm T}.

Now define \mathbf B := ( \mathbf{P} \boldsymbol{\Delta})^{-1}. By Problem 3, \mathbf B \mathbf X is multivariate normal. We calculate its expectation via

\mathbb E[\mathbf B \mathbf X] = \mathbf B \mathbb E[\mathbf X] = \mathbf B \mathbf 0 = \mathbf 0

and its covariance matrix via

\begin{aligned}\boldsymbol{\Sigma}_{\mathbf B \mathbf X} &= \mathbb E[(\mathbf B \mathbf X) (\mathbf B \mathbf X)^{\mathrm T}] \\ &= \mathbf B \mathbb E[\mathbf X \mathbf X^{\mathrm T}] \mathbf B^{\mathrm T} \\ &= ( \mathbf{P} \boldsymbol{\Delta})^{-1} (\mathbf{P} \boldsymbol{\Delta}) (\mathbf{P} \boldsymbol{\Delta})^{\mathrm T} (( \mathbf{P} \boldsymbol{\Delta})^{-1})^{\mathrm T} = \mathbf I. \end{aligned}

Therefore, \mathbf B \mathbf X \sim \mathcal N(\mathbf 0, \mathbf I). Since \mathbf Z \sim \mathcal N(\mathbf 0, \mathbf I) as well, we may simply take \mathbf Z := \mathbf B \mathbf X, so that \mathbf X = \mathbf B^{-1}\mathbf Z = \mathbf A \mathbf Z, as required.

For the general case, \mathbf X - \boldsymbol{\mu} has expectation \mathbf 0, and so by the first result, there exists some invertible matrix \mathbf A such that

\mathbf X - \boldsymbol{\mu} = \mathbf A \mathbf Z \quad \Rightarrow \quad \mathbf X = \boldsymbol{\mu} + \mathbf A \mathbf Z,

as required.
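The construction above is easy to run numerically (a sketch assuming numpy; the matrix \boldsymbol{\Sigma} below is an arbitrary symmetric positive-definite choice): numpy.linalg.eigh returns the orthogonal \mathbf P and the eigenvalues of \mathbf D, from which \mathbf A = \mathbf P \boldsymbol{\Delta} satisfies \mathbf A \mathbf A^{\mathrm T} = \boldsymbol{\Sigma}.

```python
import numpy as np

Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])           # symmetric positive definite

eigenvalues, P = np.linalg.eigh(Sigma)   # Sigma = P @ diag(eigenvalues) @ P.T
assert (eigenvalues > 0).all()           # positive definiteness in action

Delta = np.diag(np.sqrt(eigenvalues))    # Delta @ Delta.T = D
A = P @ Delta

assert np.allclose(A @ A.T, Sigma)       # the factorisation Sigma = A A^T
```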

Problem 5. Define W_1 := \bar X_n and for each i > 1, define W_i := X_i - \bar X_n. If X_1,\dots,X_n \sim \mathcal N(\mu, \sigma^2) are i.i.d., prove that \bar X_n and (W_2,\dots,W_n) are independent. Deduce that \bar X_n and S_n^2 defined by

\displaystyle S_n^2 := \frac 1{n-1}  \sum_{i=1}^n (X_i - \bar X_n)^2

are independent.


Solution. For each i > 1,

\displaystyle W_i = \sum_{j=1}^n \underbrace{ \left( \mathbb I_{\{ i \}}(j) -\frac{ 1 }{n} \right) }_{\mathbf A_{i,j}} X_j  =: \sum_{j=1}^n \mathbf A_{i, j}X_j.

Furthermore, define \mathbf A_{1, j} := 1/n, so that W_1 = \bar X_n. Defining \mathbf W := (W_1,\dots,W_n)^{\mathrm T}, we have \mathbf W = \mathbf A \mathbf X, and \mathbf A is invertible since \mathbf X can be recovered from \mathbf W. If X_1,\dots,X_n are i.i.d. normally distributed with variance \sigma^2, then by the converse of the first point in Problem 3 (which holds), \mathbf X is multivariate normal. By the second point in Problem 3, \mathbf W is multivariate normal.

We turn to evaluate its covariance matrix. Firstly, to shorten computations, note that since \mathrm{Cov}(X_i, X_k) = 0 for k \neq i by independence,

\begin{aligned} \mathrm{Cov}(X_i, \bar X_n) &= \frac 1n \cdot \sum_{k=1}^n \mathrm{Cov}(X_i, X_k) \\ &= \frac 1n \cdot \mathrm{Cov}(X_i, X_i) = \frac{\sigma^2}{n}. \end{aligned}

Furthermore,

\begin{aligned} \mathrm{Cov}(\bar X_n, \bar X_n) &= \frac 1n \cdot \sum_{k=1}^n \mathrm{Cov}(X_k, \bar X_n) \\ &= \frac 1n \cdot \sum_{k=1}^n \frac{\sigma^2}{n} = \frac 1n \cdot n \cdot \frac{\sigma^2}n = \frac{\sigma^2}n. \end{aligned}

Thus, for j > 1,

\begin{aligned} (\boldsymbol{\Sigma}_{\mathbf W})_{1,j} &= \mathrm{Cov}(W_1, W_j) \\ &= \mathrm{Cov}(\bar X_n, X_j - \bar X_n) \\ &= \mathrm{Cov}(X_j , \bar X_n) - \mathrm{Cov}(\bar X_n,\bar X_n) \\ &= \frac{\sigma^2}{n} - \frac{\sigma^2}{n} = 0.\end{aligned}

Therefore, the first row and first column of \boldsymbol{\Sigma}_{\mathbf W} vanish off the diagonal, so \boldsymbol{\Sigma}_{\mathbf W}, and hence \boldsymbol{\Sigma}_{\mathbf W}^{-1}, is block diagonal. Both the determinant and the quadratic form in the density then factorise, and

f_{\mathbf W}(\mathbf w) = f_{W_1}(w_1) \cdot f_{W_2,\dots,W_n}(w_2,\dots,w_n).

Hence, W_1 and (W_2,\dots,W_n) are independent. For the second claim, we first note that

\displaystyle 0 = \sum_{i=1}^n X_i - n\bar X_n = \sum_{i=1}^n (X_i - \bar X_n) = X_1 - \bar X_n + \sum_{i=2}^n (X_i - \bar X_n),

so that X_1 - \bar X_n = -\sum_{i=2}^n (X_i - \bar X_n) = -\sum_{i=2}^n W_i. It follows that

\begin{aligned} S_n^2 &= \frac 1{n-1} \left( (X_1 - \bar X_n)^2 +  \sum_{i=2}^n (X_i - \bar X_n)^2 \right) \\ &= \frac 1{n-1} \left( (X_1 - \bar X_n)^2 +  \sum_{i=2}^n W_i^2 \right) \\ &= \frac 1{n-1} \left( \left( \sum_{i=2}^n W_i \right)^2 +  \sum_{i=2}^n W_i^2 \right). \end{aligned}

Since S_n^2 can be written purely in terms of (W_2,\dots, W_n), it is independent of W_1 = \bar X_n.
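An empirical illustration of this independence (a sketch assuming numpy; the mean, variance, sample size, and number of repetitions are arbitrary choices): across many simulated samples, the correlation between \bar X_n and S_n^2 should be near zero. Of course, zero correlation is only a necessary consequence of the independence proved above.

```python
import numpy as np

rng = np.random.default_rng(3)
samples = rng.normal(loc=2.0, scale=1.5, size=(100_000, 10))  # 100k samples of n = 10

xbar = samples.mean(axis=1)          # sample means
s2 = samples.var(axis=1, ddof=1)     # S_n^2, with the 1/(n-1) normalisation

corr = np.corrcoef(xbar, s2)[0, 1]
assert abs(corr) < 0.02              # near-zero correlation, as expected
```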

Problem 6. Using the definitions and notation in Problem 5, prove that for any n \in \mathbb N_{>1},

\displaystyle (n-1) \cdot S_n^2 = (n-2) \cdot S_{n-1}^2 + \frac{n-1}{n} \cdot (X_n - \bar X_{n-1})^2.

Deduce that there exist i.i.d. Z_1,\dots, Z_{n-1} \sim \mathcal N(0, 1) such that

\displaystyle \frac{(n-1) \cdot S_n^2}{\sigma^2} = \sum_{i=1}^{n-1} Z_i^2.


Solution. We first observe that

\begin{aligned} \bar X_n - \bar X_{n-1} &= \frac{1}{n} \cdot \sum_{i=1}^{n-1} X_i + \frac{1}{n} \cdot X_n - \frac 1{n-1} \cdot \sum_{i=1}^{n-1} X_i \\ &= -\frac{1}{n(n-1)} \cdot \sum_{i=1}^{n-1} X_i + \frac{1}{n} \cdot X_n = \frac 1n \cdot ( X_n - \bar X_{n-1} ). \end{aligned}

Performing some algebra,

\begin{aligned} &(n-1) \cdot S_n^2 \\ &= \sum_{i=1}^n \left( X_i - \bar X_{n-1} - \frac 1n \cdot ( X_n - \bar X_{n-1})\right)^2 \\ &= \sum_{i=1}^n \left[ (X_i - \bar X_{n-1})^2 - 2 \cdot (X_i - \bar X_{n-1}) \cdot  \frac 1n \cdot ( X_n - \bar X_{n-1}) + \frac 1{n^2} \cdot ( X_n - \bar X_{n-1})^2 \right] \\ &= (n-2) \cdot S_{n-1}^2 + (X_n - \bar X_{n-1})^2 - \frac{2}{n}  \cdot ( X_n - \bar X_{n-1})^2 + \frac 1{n} \cdot ( X_n - \bar X_{n-1})^2 \\ &= (n-2) \cdot S_{n-1}^2 + \frac{n-1}{n} \cdot (X_n - \bar X_{n-1})^2. \end{aligned}

We now prove the final equation by induction. In the case n = 2, we have

\begin{aligned} \frac{S_2^2}{\sigma^2} &= \frac 1{\sigma^2} \cdot ((X_1 -\bar X_2)^2 + (X_2 -\bar X_2)^2) = \frac{(X_2 - X_1)^2}{2\sigma^2}. \end{aligned}

Then Z_1 := (X_2 - X_1)/(\sigma \sqrt 2) \sim \mathcal N(0, 1), so that S_2^2/\sigma^2 = Z_1^2, as required. For the induction step, suppose that the statement holds for n = k. Then

\displaystyle k \cdot S_{k+1}^2 = (k-1) \cdot S_k^2 + \frac{k}{k+1} \cdot (X_{k+1} - \bar X_k)^2.

By relabelling the result in Problem 5, we have that S_k^2 is independent of \bar X_k and X_{k+1}. By the induction hypothesis, there exist i.i.d. Z_1,\dots, Z_{k-1} \sim \mathcal N(0, 1) such that

\displaystyle \frac{ (k-1) \cdot S_k^2 }{\sigma^2} = \sum_{i=1}^{k-1} Z_i^2.

Furthermore, X_{k+1} - \bar X_k \sim \mathcal N(0, \sigma^2 \cdot (k+1)/k) so that

\displaystyle Z_{k} := \frac{\sqrt{k}}{\sqrt{k+1}} \cdot \frac{ X_{k+1} - \bar X_k }{\sigma} = \frac{X_{k+1} - \bar X_k}{ \sigma \cdot \sqrt{ \frac{k+1}{k} } } \sim \mathcal N(0, 1),

and by the independence noted above, Z_k is independent of S_k^2, hence of \sum_{i=1}^{k-1} Z_i^2.

Therefore,

\displaystyle \frac{ k \cdot S_{k+1}^2 }{ \sigma^2 } = \frac{ (k-1) \cdot S_{k}^2 }{ \sigma^2 } + \frac{k}{k+1} \cdot \frac{ (X_{k+1} - \bar X_{k})^2 }{ \sigma^2 } = \sum_{i=1}^{k-1} Z_i^2 + Z_k^2 = \sum_{i=1}^{k} Z_i^2,

as required.
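Both claims lend themselves to a quick numerical check (a sketch assuming numpy and scipy; n, \sigma, and the sample sizes are arbitrary choices): the recursion is an algebraic identity that holds sample by sample, and (n-1) S_n^2 / \sigma^2 should follow a \chi^2 distribution with n - 1 degrees of freedom.

```python
import numpy as np
from scipy.stats import chi2, kstest

rng = np.random.default_rng(4)
n, sigma = 8, 2.0

# The recursion holds exactly for any sample.
x = rng.normal(0.0, sigma, size=n)
lhs = (n - 1) * x.var(ddof=1)
rhs = (n - 2) * x[:-1].var(ddof=1) + (n - 1) / n * (x[-1] - x[:-1].mean()) ** 2
assert np.isclose(lhs, rhs)

# (n - 1) S_n^2 / sigma^2 against the chi-squared distribution, n - 1 d.o.f.
samples = rng.normal(0.0, sigma, size=(50_000, n))
statistic = (n - 1) * samples.var(axis=1, ddof=1) / sigma ** 2
print(kstest(statistic, chi2(df=n - 1).cdf))   # expect a large p-value
```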

—Joel Kindiak, 25 Jul 25, 2018H
