
[Notes on Mathematics for ESL] Chapter 3: Linear Regression Models and Least Squares


3.2 Linear Regression Models and Least Squares

Derivation of Equation (3.8)

The least squares estimate of $\beta$ is given by the book's Equation (3.6):

$$\hat\beta = (X^TX)^{-1}X^T\mathbf{y}.$$

From the previous post, we know that $\mathrm{E}(\mathbf{y})=X\beta$. As a result, we obtain

$$\mathrm{E}(\hat\beta) = (X^TX)^{-1}X^T\,\mathrm{E}(\mathbf{y}) = (X^TX)^{-1}X^TX\beta = \beta.$$

Then, writing $\mathbf{y}=X\beta+\varepsilon$, we get

$$\hat\beta - \mathrm{E}(\hat\beta) = (X^TX)^{-1}X^T(X\beta+\varepsilon) - \beta = (X^TX)^{-1}X^T\varepsilon.$$

The variance of $\hat \beta$ is computed as

$$\mathrm{Var}(\hat\beta) = \mathrm{E}\big[(\hat\beta-\mathrm{E}(\hat\beta))(\hat\beta-\mathrm{E}(\hat\beta))^T\big] = (X^TX)^{-1}X^T\,\mathrm{Var}(\varepsilon)\,X(X^TX)^{-1}.$$

If we assume that the entries of $\mathbf{y}$ are uncorrelated and all have the same variance $\sigma^2$, then $\mathrm{Var}(\varepsilon)=\sigma^2I_N$ and the above equation becomes

$$\mathrm{Var}(\hat\beta) = \sigma^2(X^TX)^{-1}X^TX(X^TX)^{-1} = (X^TX)^{-1}\sigma^2.$$

This completes the derivation of Equation (3.8).
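
To make Equation (3.8) concrete, here is a minimal numpy sketch (the design matrix, the noise level $\sigma$, and the number of replications are toy values chosen only for illustration) that compares the Monte Carlo covariance of $\hat\beta$ with $(X^TX)^{-1}\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: fixed design X, true beta, and noise level (all assumed values).
N, p, sigma = 200, 3, 0.5
X = rng.normal(size=(N, p))
beta = np.array([1.0, -2.0, 0.5])

# Draw many datasets from y = X beta + eps and refit OLS each time.
betas = []
for _ in range(5000):
    y = X @ beta + sigma * rng.normal(size=N)
    betas.append(np.linalg.lstsq(X, y, rcond=None)[0])
betas = np.array(betas)

empirical = np.cov(betas, rowvar=False)          # Monte Carlo Var(beta_hat)
theoretical = sigma**2 * np.linalg.inv(X.T @ X)  # Equation (3.8)
print(np.max(np.abs(empirical - theoretical)))   # should be close to zero
```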

Thoughts on Equations (3.12) and (3.13)

There are a lot of statistical concepts in this part. It's worth going through Chapters 6 and 10 of All of Statistics first to get a feel for hypothesis tests and confidence intervals.

From my own viewpoint, the Z-score and the F-statistic measure whether the corresponding features are useful or not, so they can be used inside some feature selection methods. However, they're not very useful in practice; the preferred feature selection methods are discussed in Section 3.3 of the book.
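
For readers who want to see the Z-score of Equation (3.12) in action, here is a small numpy sketch on simulated data (the design, the true coefficients, and the rough $|z| \gtrsim 2$ rule of thumb in the comment are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data; the last feature has a zero true coefficient (assumed setup).
N = 100
X = np.column_stack([np.ones(N), rng.normal(size=(N, 3))])   # intercept + 3 features
beta_true = np.array([0.5, 2.0, -1.0, 0.0])
y = X @ beta_true + rng.normal(size=N)

p = X.shape[1] - 1                                # number of non-intercept features
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
sigma_hat = np.sqrt(resid @ resid / (N - p - 1))  # unbiased estimate of sigma from residuals

# Z-score of Equation (3.12): beta_j / (sigma_hat * sqrt(v_j)),
# where v_j is the j-th diagonal element of (X^T X)^{-1}.
z = beta_hat / (sigma_hat * np.sqrt(np.diag(XtX_inv)))
print(z)   # |z| around 2 or more is the usual rough threshold for "significant"
```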

Interpretations of Equations (3.20) and (3.22)

For any estimator $\tilde\theta$ of $\theta$, adding and subtracting $\mathrm{E}(\tilde\theta)$ gives

$$\mathrm{E}(\tilde\theta-\theta)^2 = \mathrm{E}\big(\tilde\theta-\mathrm{E}(\tilde\theta)\big)^2 + \big(\mathrm{E}(\tilde\theta)-\theta\big)^2 = \mathrm{Var}(\tilde\theta) + \big(\mathrm{E}(\tilde\theta)-\theta\big)^2,$$

since the cross term has zero expectation, which completes the derivation of Equation (3.20).

Equation (3.22) shows that the expected quadratic error can be broken down into two parts as

$$\mathrm{E}\big(Y_0-\tilde f(x_0)\big)^2 = \sigma^2 + \mathrm{E}\big(\tilde f(x_0)-f(x_0)\big)^2 = \sigma^2 + \text{MSE}\big(\tilde f(x_0)\big).$$

The first error component $\sigma^2$ is unrelated to the model used to describe our data. It cannot be reduced, because it comes from the true data-generating process. The second component, the term $\text{MSE}(\tilde f(x_0))$, represents the error in the model and is under our control. By Equation (3.20), this mean squared error can be broken down into two terms: a model variance term and a squared model bias term. How to make these two terms as small as possible, while handling the trade-off between them, is the central topic of the book.
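
The decomposition can also be checked numerically. Below is a minimal Monte Carlo sketch, assuming a toy fixed design, an arbitrary query point $x_0$, and a ridge-shrunken estimator standing in for $\tilde f$; it shows that the simulated MSE at $x_0$ equals the variance plus the squared bias, as in Equation (3.20):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed values): fixed design, true beta, noise level, query point x0.
N, p, sigma, lam = 50, 5, 1.0, 10.0
X = rng.normal(size=(N, p))
beta = rng.normal(size=p)
x0 = rng.normal(size=p)
f0 = x0 @ beta                      # true regression function value at x0

preds = []
for _ in range(20000):
    y = X @ beta + sigma * rng.normal(size=N)
    b = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)   # a biased (ridge) estimator
    preds.append(x0 @ b)
preds = np.array(preds)

variance = preds.var()
bias_sq = (preds.mean() - f0) ** 2
mse = np.mean((preds - f0) ** 2)
print(mse, variance + bias_sq)      # Equation (3.20): the two numbers agree
```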

Notes on Multiple Regression from Simple Univariate Regression

The first thing that comes to my mind when I read this section is why we need this at all, when we already have the ordinary least squares (OLS) estimate of $\beta$:

$$\hat\beta = (X^TX)^{-1}X^T\mathbf{y}.$$

It’s because we want to study how to obtain orthogonal inputs instead of correlated inputs, since orthogonal inputs have some nice properties.

Following Algorithm 3.1, we can transform the correlated inputs $\mathbf{x}$ into the orthogonal inputs $\mathbf{z}$. Another view is that we form an orthogonal basis $\{\mathbf{z}_i\}_{i=1}^p$ by performing the Gram-Schmidt orthogonalization procedure on $X$'s column vectors. With this basis, linear regression can be done simply as in the univariate case, as shown in Equation (3.28):

$$\hat\beta_p = \frac{\langle \mathbf{z}_p, \mathbf{y}\rangle}{\langle \mathbf{z}_p, \mathbf{z}_p\rangle}.$$

Following this equation, we can derive Equation (3.29):

$$\mathrm{Var}(\hat\beta_p) = \frac{\mathrm{Var}\big(\langle \mathbf{z}_p, \mathbf{y}\rangle\big)}{\langle \mathbf{z}_p, \mathbf{z}_p\rangle^2} = \frac{\sigma^2}{\langle \mathbf{z}_p, \mathbf{z}_p\rangle} = \frac{\sigma^2}{\lVert \mathbf{z}_p\rVert^2}.$$
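
Here is a small numpy sketch in the spirit of Algorithm 3.1 (the loop and the variable names are mine, and the data are simulated); it checks that the univariate coefficient of $\mathbf{y}$ on the last orthogonal input $\mathbf{z}_p$ reproduces the last coefficient of the full multiple regression, as Equation (3.28) says:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated correlated inputs (assumed setup), plus an intercept column.
N, p = 100, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, p)) @ rng.normal(size=(p, p))])
y = rng.normal(size=N)

# Successive orthogonalization: regress each column on the previous z's
# and keep the residual as the next z.
Z = np.zeros_like(X)
for j in range(X.shape[1]):
    z = X[:, j].copy()
    for k in range(j):
        z -= (Z[:, k] @ X[:, j]) / (Z[:, k] @ Z[:, k]) * Z[:, k]
    Z[:, j] = z

# Equation (3.28): the coefficient from a univariate fit of y on z_p ...
beta_p_univariate = (Z[:, -1] @ y) / (Z[:, -1] @ Z[:, -1])
# ... equals the last coefficient of the full multiple regression.
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta_p_univariate, beta_full[-1])   # the two values coincide
```

NumPy's `np.linalg.qr` performs essentially the same orthogonalization in a numerically more stable way, which connects this view to the QR discussion below.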

We can write the Gram-Schmidt result in matrix form using the QR decomposition as

$$X = QR.$$

In this decomposition, $Q$ is an $N\times(p+1)$ matrix with orthonormal columns and $R$ is a $(p+1)\times(p+1)$ upper triangular matrix. In this representation, the OLS estimate of $\beta$ can be written as

$$\hat\beta = (X^TX)^{-1}X^T\mathbf{y} = (R^TQ^TQR)^{-1}R^TQ^T\mathbf{y} = R^{-1}(R^T)^{-1}R^TQ^T\mathbf{y} = R^{-1}Q^T\mathbf{y},$$

which is Equation (3.32) in the book. Following this equation, the fitted values $\mathbf{\hat y}$ can be written as

$$\mathbf{\hat y} = X\hat\beta = QRR^{-1}Q^T\mathbf{y} = QQ^T\mathbf{y},$$

which is Equation (3.33) in the book.
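
A minimal numpy check of Equations (3.32) and (3.33) on simulated data (the sizes and coefficients are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data with an intercept column (assumed setup).
N, p = 100, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, p))])
y = X @ rng.normal(size=p + 1) + rng.normal(size=N)

# Thin QR decomposition: Q is N x (p+1) with orthonormal columns, R is upper triangular.
Q, R = np.linalg.qr(X)

beta_qr = np.linalg.solve(R, Q.T @ y)   # Equation (3.32); R is triangular, so plain
                                        # back-substitution would suffice here
y_hat_qr = Q @ (Q.T @ y)                # Equation (3.33)

beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.allclose(beta_qr, beta_ls), np.allclose(y_hat_qr, X @ beta_ls))
```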

3.4 Shrinkage Methods

Notes on Ridge Regression

If we compute the singular value decomposition (SVD) of the $N\times p$ centered data matrix $X$ as

$$X = UDV^T,$$

where $U$ is an $N \times p$ matrix with orthonormal columns that span the column space of $X$, $V$ is a $p \times p$ orthogonal matrix, and $D$ is a $p \times p$ diagonal matrix with elements $d_j$ ordered such that $d_1\ge d_2 \ge \dots \ge d_p \ge 0$, then from this representation of $X$ we can derive a simple expression for $X^TX$:

$$X^TX = VDU^TUDV^T = VD^2V^T,$$

which is Equation (3.48) in the book. Using this expression, we can compute the least squares fitted values as

$$X\hat\beta^{\text{ls}} = X(X^TX)^{-1}X^T\mathbf{y} = UDV^T(VD^2V^T)^{-1}VDU^T\mathbf{y} = UU^T\mathbf{y},$$

which is Equation (3.46) in the book. Similarly, we can find the fitted values for ridge regression as

$$X\hat\beta^{\text{ridge}} = X(X^TX+\lambda I)^{-1}X^T\mathbf{y} = UD(D^2+\lambda I)^{-1}DU^T\mathbf{y} = \sum_{j=1}^p \mathbf{u}_j\,\frac{d_j^2}{d_j^2+\lambda}\,\mathbf{u}_j^T\mathbf{y},$$

which is Equation (3.47) in the book. Since we can estimate the sample covariance matrix by $X^TX/N$, the variance of $\mathbf{z}_1 = Xv_1$ can be derived as follows:

$$\mathrm{Var}(\mathbf{z}_1) = \mathrm{Var}(Xv_1) = v_1^T\,\frac{X^TX}{N}\,v_1 = \frac{v_1^TVD^2V^Tv_1}{N} = \frac{d_1^2}{N},$$

which is Equation (3.49) in the book. Note that $v_1$ is the first column of $V$ and $V$ is orthogonal, so that $V^Tv_1$ is $[1,0, \dots, 0]^T$.
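
To sanity-check Equations (3.46), (3.47) and (3.49), here is a small numpy sketch on a simulated, centered data matrix (the sizes and the value of $\lambda$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated centered data matrix (assumed setup) and a ridge penalty.
N, p, lam = 200, 4, 5.0
X = rng.normal(size=(N, p))
X -= X.mean(axis=0)                      # center the columns
y = rng.normal(size=N)

U, d, Vt = np.linalg.svd(X, full_matrices=False)   # X = U D V^T, d sorted descending

# Equation (3.46): the least squares fit equals U U^T y.
fit_ls = X @ np.linalg.lstsq(X, y, rcond=None)[0]
print(np.allclose(fit_ls, U @ (U.T @ y)))

# Equation (3.47): ridge shrinks each coordinate u_j^T y by d_j^2 / (d_j^2 + lambda).
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
fit_ridge_svd = U @ (d**2 / (d**2 + lam) * (U.T @ y))
print(np.allclose(X @ beta_ridge, fit_ridge_svd))

# Equation (3.49): the first principal component z_1 = X v_1 has sample variance d_1^2 / N.
z1 = X @ Vt[0]
print(np.allclose(z1.var(), d[0]**2 / N))
```

Note that the shrinkage factors $d_j^2/(d_j^2+\lambda)$ are exactly the quantities that get summed in the degrees-of-freedom formula (3.50) discussed below.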

Notes on the Degrees-of-Freedom Formula for LAR and Lasso

The degrees of freedom of the fitted vector $\mathbf{\hat y}=(\hat y_1, \dots, \hat y_N)$ is defined as

$$\text{df}(\mathbf{\hat y}) = \frac{1}{\sigma^2}\sum_{i=1}^N \mathrm{Cov}(\hat y_i, y_i)$$

in the book. Also, it's claimed without proof that $\text{df}(\mathbf{\hat y})$ is $k$ for ordinary least squares regression and $\text{tr}(\mathbf{S}_{\lambda})$ for ridge regression. Here, we'll derive these two results. First, we define $e_i$ as an $N$-element vector of all zeros with a one in the $i$th spot. It's easy to see that $\hat y_i=e_i^T\mathbf{\hat y}$ and $y_i=e_i^T\mathbf{y}$, so that

$$\mathrm{Cov}(\hat y_i, y_i) = \mathrm{Cov}(e_i^T\mathbf{\hat y}, e_i^T\mathbf{y}) = e_i^T\,\mathrm{Cov}(\mathbf{\hat y}, \mathbf{y})\,e_i.$$

For OLS regression, we have $\mathbf{\hat y}=X(X^TX)^{-1}X^T\mathbf{y}$, so the above expression for $\text{Cov}(\mathbf{\hat y}, \mathbf{y})$ becomes

$$\mathrm{Cov}(\mathbf{\hat y}, \mathbf{y}) = X(X^TX)^{-1}X^T\,\mathrm{Cov}(\mathbf{y}, \mathbf{y}) = \sigma^2 X(X^TX)^{-1}X^T.$$

Thus,

$$\mathrm{Cov}(\hat y_i, y_i) = \sigma^2 e_i^TX(X^TX)^{-1}X^Te_i = \sigma^2 x_i^T(X^TX)^{-1}x_i,$$

where $x_i=X^Te_i$ is the $i$th row of $X$, i.e. the $i$th sample's feature vector. According to the given formula, we get

$$\text{df}(\mathbf{\hat y}) = \frac{1}{\sigma^2}\sum_{i=1}^N \sigma^2\,x_i^T(X^TX)^{-1}x_i = \sum_{i=1}^N \mathrm{tr}\big(x_ix_i^T(X^TX)^{-1}\big) = \mathrm{tr}\Big(\Big(\sum_{i=1}^N x_ix_i^T\Big)(X^TX)^{-1}\Big).$$

If you're not familiar with the basic properties of the trace, you can refer to this page. Note that

$$\sum_{i=1}^N x_ix_i^T = X^TX.$$

Thus, when there are $k$ predictors we get

$$\text{df}(\mathbf{\hat y}) = \mathrm{tr}\big(X^TX(X^TX)^{-1}\big) = \mathrm{tr}(I_k) = k,$$

the claimed result for OLS in the book. Similarly, for ridge regression we have $\mathbf{\hat y}=\mathbf{S}_\lambda\mathbf{y}$ with $\mathbf{S}_\lambda=X(X^TX+\lambda I)^{-1}X^T$, so that

$$\text{df}(\mathbf{\hat y}) = \frac{1}{\sigma^2}\sum_{i=1}^N e_i^T\,\mathrm{Cov}(\mathbf{S}_\lambda\mathbf{y}, \mathbf{y})\,e_i = \sum_{i=1}^N e_i^T\mathbf{S}_\lambda e_i = \mathrm{tr}(\mathbf{S}_\lambda) = \sum_{j=1}^p \frac{d_j^2}{d_j^2+\lambda},$$

which is Equation (3.50) in the book.
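
Both degrees-of-freedom results are easy to verify numerically. Below is a minimal sketch on simulated centered data (the sizes and $\lambda$ are arbitrary assumptions) comparing $\mathrm{tr}(\mathbf{S}_\lambda)$ with the shrinkage-factor form of Equation (3.50), and checking that the OLS hat matrix has trace equal to the number of predictors $k$:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated centered data (assumed setup); k predictors and a ridge penalty lambda.
N, k, lam = 150, 6, 3.0
X = rng.normal(size=(N, k))
X -= X.mean(axis=0)

# OLS: the hat matrix X (X^T X)^{-1} X^T has trace k.
H = X @ np.linalg.inv(X.T @ X) @ X.T
print(np.trace(H))                       # equals k

# Ridge: S_lambda = X (X^T X + lam I)^{-1} X^T, and Equation (3.50) says
# tr(S_lambda) = sum_j d_j^2 / (d_j^2 + lam).
S = X @ np.linalg.inv(X.T @ X + lam * np.eye(k)) @ X.T
d = np.linalg.svd(X, compute_uv=False)
print(np.trace(S), np.sum(d**2 / (d**2 + lam)))   # the two values coincide
```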
