---
title: "Lab 10"
author: "Nadir Nibras"
date: "March 21, 2019"
output: html_document
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
Clear the workspace, console, and plots
```{r}
# clear workspace variables
rm(list = ls())
# clear console (same as Ctrl+L)
cat("\014")
# close all plots
graphics.off()
```
# Lab 10
## 6.7 Lab 3: PCR and PLS Regression
## 6.7.1 Principal Components Regression
Principal components regression (PCR) can be performed using the pcr() function, which is part of the pls library. We now apply PCR to the Hitters data in order to predict Salary. Again, ensure that the missing values have been removed from the data, as described in Section 6.5.
First, we reload the variables and the train/test split used in the previous labs
```{r}
library(ISLR)
library(pls)   # provides pcr() and plsr()
Hitters = na.omit(Hitters)
x = model.matrix(Salary ~ ., Hitters)[, -1]   # predictor matrix, intercept column dropped
y = Hitters$Salary
set.seed(1)
train = sample(1:nrow(x), nrow(x)/2)
test = (-train)
y.test = y[test]
```
```{r}
set.seed(2)
pcr.fit = pcr(Salary ~ ., data = Hitters, scale = TRUE, validation = "CV")
```
The syntax for the pcr() function is similar to that for lm(), with a few additional options. Setting scale=TRUE standardizes each predictor, using (6.6), prior to generating the principal components, so that the scale on which each variable is measured has no effect. Setting validation="CV" causes pcr() to compute the ten-fold cross-validation error for each possible value of M, the number of principal components used. The resulting fit can be examined using summary().
```{r}
summary(pcr.fit)
```
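As a sanity check, PCR with M components is equivalent to ordinary least squares on the first M principal-component scores. The sketch below (not part of the original lab; the names pc and lm.fit are ours) reproduces the five-component fitted values by hand with prcomp() and lm().
```{r}
# Hand-rolled PCR: principal components of the standardized predictors,
# followed by least squares on the first 5 score vectors. Sign flips and
# constant rescalings of the scores are absorbed by lm(), so the fitted
# values should match predict() on the pcr fit.
pc = prcomp(x, scale. = TRUE)
lm.fit = lm(y ~ pc$x[, 1:5])
head(cbind(fitted(lm.fit), drop(predict(pcr.fit, ncomp = 5))))
```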
Note that the CV values (cross-validation errors) reported in the summary are RMSEs, not MSEs.
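This can be verified with the pls accessors RMSEP() and MSEP() (a quick sketch; we assume the usual three-dimensional $val layout of the returned object):
```{r}
# Squaring the CV row of RMSEP() should reproduce MSEP()
rmsep.cv = RMSEP(pcr.fit)$val["CV", 1, ]
msep.cv = MSEP(pcr.fit)$val["CV", 1, ]
all.equal(rmsep.cv^2, msep.cv)
```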
One can also plot the cross-validation scores using the validationplot() function. Using val.type="MSEP" will cause the cross-validation MSE to be plotted.
```{r}
validationplot(pcr.fit, val.type = "MSEP")
```
We see that the smallest cross-validation error (347.6) occurs when M = 16 components are used. This is barely fewer than M = 19, which amounts to simply performing least squares, because when all of the components are used in PCR no dimension reduction occurs. However, from the plot we also see that the cross-validation error is roughly the same when only one component is included in the model.
This suggests that a model that uses just a small number of components might suffice. The summary() function also provides the percentage of variance explained in the predictors and in the response using different numbers of components. This concept is discussed in greater detail in Chapter 10. Briefly, we can think of this as the amount of information about the predictors or the response that is captured using M principal components. For example, setting M = 1 only captures 38.31% of all the variance, or information, in the predictors. In contrast, using M = 6 increases the value to 88.63%. If we were to use all M = p = 19 components, this would increase to 100%.
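The variance-explained figures quoted above can also be pulled out programmatically; a sketch using the pls accessor explvar():
```{r}
# Cumulative % of predictor variance explained by successive components;
# this should match the X row of the TRAINING table printed by summary()
round(cumsum(explvar(pcr.fit)), 2)
```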
We now perform PCR on the training data and evaluate its test set performance.
```{r}
set.seed(1)
pcr.fit = pcr(Salary ~ ., data = Hitters, subset = train, scale = TRUE, validation = "CV")
validationplot(pcr.fit, val.type = "MSEP")
```
Now we find that the lowest cross-validation error occurs when M = 7 components are used. We compute the test MSE as follows.
```{r}
pcr.pred = predict(pcr.fit, x[test, ], ncomp = 7)
mean((pcr.pred - y.test)^2)
```
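Rather than reading M off the plot, the minimizer can be extracted directly; a sketch using the MSEP() accessor:
```{r}
# Number of components minimizing the cross-validated MSE; the first entry
# of the curve is the intercept-only model (0 components), hence the -1
cv.mse = MSEP(pcr.fit)$val["CV", 1, ]
as.numeric(which.min(cv.mse)) - 1
```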
This test set MSE is competitive with the results obtained using ridge regression and the lasso. However, as a result of the way PCR is implemented, the final model is more difficult to interpret because it does not perform any kind of variable selection or even directly produce coefficient estimates. Finally, we fit PCR on the full data set, using M = 7, the number of components identified by cross-validation.
```{r}
pcr.fit = pcr(y ~ x, scale = TRUE, ncomp = 7)
summary(pcr.fit)
```
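Although the summary does not report coefficient estimates, the coefficients implied by the low-dimensional fit can be recovered with the package's coef() method:
```{r}
# Regression coefficients implied by the 7-component fit, expressed on the
# (standardized) original predictors
coef(pcr.fit, ncomp = 7)
```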
## 6.7.2 Partial Least Squares
We implement partial least squares (PLS) using the plsr() function, also in the pls library. The syntax is just like that of the pcr() function.
```{r}
set.seed(1)
pls.fit = plsr(Salary ~ ., data = Hitters, subset = train, scale = TRUE, validation = "CV")
summary(pls.fit)
```
The lowest cross-validation error occurs when only M = 2 partial least squares directions are used. We now evaluate the corresponding test set MSE.
```{r}
pls.pred = predict(pls.fit, x[test, ], ncomp = 2)
mean((pls.pred - y.test)^2)
```
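Picking M at the bare minimum of the CV curve can chase validation noise; recent versions of the pls package also provide selectNcomp(), which applies a one-standard-error rule or a permutation test (a sketch, assuming pls >= 2.5):
```{r}
# Suggest a more parsimonious M via the one-sigma (one-standard-error) rule
selectNcomp(pls.fit, method = "onesigma", plot = TRUE)
```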
The test MSE is comparable to, but slightly higher than, the test MSE obtained using ridge regression, the lasso, and PCR.
Finally, we perform PLS using the full data set, using M = 2, the number of components identified by cross-validation.
```{r}
pls.fit = plsr(Salary ~ ., data = Hitters, scale = TRUE, ncomp = 2)
summary(pls.fit)
```
Notice that the percentage of variance in Salary that the two-component PLS fit explains, 46.40%, is almost as much as that explained by the final seven-component PCR fit, 46.69%. This is because PCR only attempts to maximize the amount of variance explained in the predictors, while PLS searches for directions that explain variance in both the predictors and the response.
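This comparison can be read off directly with the pls accessor R2(), which reports the fraction of response variance explained on the training data (a sketch; pcr.fit here is the seven-component full-data fit from above):
```{r}
# Fraction of Salary variance explained: 2-component PLS vs 7-component PCR
R2(pls.fit, ncomp = 2, estimate = "train")
R2(pcr.fit, ncomp = 7, estimate = "train")
```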