---
title: "Spervised Classification: an Exploration with R"
author: "Erwan Le Pennec and Ana Karina Fermin"
date: "Spring 2015"
output:
html_document:
keep_md: yes
---
```{r Knitr_Global_Options, include=FALSE}
library(knitr)
opts_chunk$set(warning = FALSE, message = FALSE, cache = TRUE, autodep = TRUE, tidy = FALSE)
```
We will use the twoClass dataset from _Applied Predictive Modeling_, the book by M. Kuhn and K. Johnson, to illustrate the most classical supervised classification algorithms. We will rely on some _advanced_ __R__ packages: __ggplot2__ for the figures and __caret__ for the learning part; __caret__ provides a unified interface to many other packages. We will also use __h2o__, a package dedicated to in-memory learning on large datasets, for its _deep_ learning algorithm.
```{r Libraries, cache = FALSE}
library("plyr")
library("dplyr")
library("ggplot2")
library("gridExtra")
library("caret")
library("h2o")
library("doMC")
registerDoMC(cores = 3)
```
# The twoClass dataset
We first read the dataset and use __ggplot2__ to display it.
```{r Read_and_Plot}
library(AppliedPredictiveModeling)
library(RColorBrewer)
data(twoClassData)
twoClass <- cbind(as.data.frame(predictors), classes)
twoClassColor <- brewer.pal(3,'Set1')[1:2]
names(twoClassColor) <- c('Class1','Class2')
ggplot(data = twoClass,aes(x = PredictorA, y = PredictorB)) +
geom_point(aes(color = classes), size = 6, alpha = .5) +
scale_colour_manual(name = 'classes', values = twoClassColor) +
scale_x_continuous(expand = c(0,0)) +
scale_y_continuous(expand = c(0,0))
```
We create a few functions that will be useful to display our classifiers.
```{r Plot_Classifiers}
nbp <- 250
PredA <- seq(min(twoClass$PredictorA), max(twoClass$PredictorA), length = nbp)
PredB <- seq(min(twoClass$PredictorB), max(twoClass$PredictorB), length = nbp)
Grid <- expand.grid(PredictorA = PredA, PredictorB = PredB)
PlotGrid <- function(pred,title) {
surf <- (ggplot(data = twoClass, aes(x = PredictorA, y = PredictorB,
color = classes)) +
geom_tile(data = cbind(Grid, classes = pred), aes(fill = classes)) +
scale_fill_manual(name = 'classes', values = twoClassColor) +
ggtitle("Decision region") + theme(legend.text = element_text(size = 10)) +
scale_colour_manual(name = 'classes', values = twoClassColor)) +
scale_x_continuous(expand = c(0,0)) +
scale_y_continuous(expand = c(0,0))
pts <- (ggplot(data = twoClass, aes(x = PredictorA, y = PredictorB,
color = classes)) +
geom_contour(data = cbind(Grid, classes = pred), aes(z = as.numeric(classes)),
color = "red", level = 1.5) +
geom_point(size = 4, alpha = .5) +
ggtitle("Decision boundary") +
theme(legend.text = element_text(size = 10)) +
scale_colour_manual(name = 'classes', values = twoClassColor)) +
scale_x_continuous(expand = c(0,0)) +
scale_y_continuous(expand = c(0,0))
grid.arrange(surf, pts, main = textGrob(title, gp = gpar(fontsize = 20)), ncol = 2)
}
```
As explained in the introduction, we will use __caret__ for the learning part. This package provides a unified interface to a huge number of classifiers available in __R__. It is a very powerful tool for exploring the different models. In particular, it computes a _resampling_ accuracy estimate and lets the user choose the specific methodology. We will use a repeated V-fold strategy with $10$ folds and $4$ repetitions. We will reuse the same _seed_ for each model in order to be sure that the same folds are used.
```{r Caret_Control}
library("caret")
V <- 10
T <- 4
TrControl <- trainControl(method = "repeatedcv",
number = V,
repeats = T)
Seed <- 345
```
Finally, we provide a function that stores the accuracies for every resample and every model. We will also compute an accuracy based on the same data as that used for training, in order to illustrate the over-fitting phenomenon.
```{r Errs_Function}
ErrsCaret <- function(Model, Name) {
Errs <- data.frame(t(postResample(predict(Model, newdata = twoClass), twoClass[["classes"]])),
Resample = "None", model = Name)
rbind(Errs, data.frame(Model$resample, model = Name))
}
Errs <- data.frame()
```
We are now ready to define a function that takes as input the current collection of accuracies, a model name, the corresponding formula and method, as well as further parameters used to specify the model; it computes the trained model, displays its prediction in a figure, and adds the errors to the collection.
```{r Learn_And_Display}
CaretLearnAndDisplay <- function (Errs, Name, Formula, Method, ...) {
set.seed(Seed)
Model <- train(as.formula(Formula), data = twoClass, method = Method, trControl = TrControl, ...)
Pred <- predict(Model, newdata = Grid)
PlotGrid(Pred, Name)
rbind(Errs, ErrsCaret(Model, Name))
}
```
We can apply this function to any model available in __caret__. We will pick a few models and sort them according to the heuristic used to define them. We will distinguish models coming from a statistical point of view, in which one tries to estimate the conditional law and plug it into the Bayes classifier, from those coming from an optimization point of view, in which one tries to enforce a _small_ training error by minimizing a relaxed criterion.
## Statistical point of view
### Generative Modeling
In the generative modeling approach, one proposes to estimate the joint law of the covariates and the label and to derive the conditional law using the Bayes formula. Generative models differ by the choice of the density estimator (often specified by an intra-class model). We consider here the LDA and QDA methods, in which a Gaussian model is used, and two variants of the Naive Bayes method, in which all the features are assumed to be independent.
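More precisely, denoting by $\pi_k$ the prior probability of class $k$ and by $f_k$ the class-conditional density of the covariates, the plug-in classifier derived from the Bayes formula is
$$\widehat{k}(x) = \mathop{\mathrm{argmax}}_{k} \frac{\widehat{\pi}_k \widehat{f}_k(x)}{\sum_{k'} \widehat{\pi}_{k'} \widehat{f}_{k'}(x)} = \mathop{\mathrm{argmax}}_{k}\, \widehat{\pi}_k \widehat{f}_k(x).$$
LDA uses Gaussian estimates $\widehat{f}_k$ with a common covariance matrix, which yields linear decision boundaries, while QDA allows one covariance matrix per class, which yields quadratic boundaries.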
```{r LDA_and_QDA}
Errs <- CaretLearnAndDisplay(Errs, "Linear Discriminant Analysis", "classes ~ .", "lda")
Errs <- CaretLearnAndDisplay(Errs, "Quadratic Discriminant Analysis", "classes ~ .", "qda")
```
```{r Classical_Naive_Bayes}
Errs <- CaretLearnAndDisplay(Errs, "Naive Bayes with Gaussian model", "classes ~ .", "nb",
tuneGrid = data.frame(usekernel = c(FALSE), fL = c(0)))
```
```{r Naive_Bayes_with_kernel_density_estimation}
Errs <- CaretLearnAndDisplay(Errs, "Naive Bayes with kernel density estimates", "classes ~ .", "nb",
tuneGrid = data.frame(usekernel = c(TRUE), fL = c(0)))
```
### Logistic regression
The most classical model is probably the logistic model, which is the canonical example of a parametric conditional regression estimate.
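Recall that the logistic model assumes a linear form for the log-odds of the conditional law,
$$\log \frac{\mathbb{P}(Y = \mathrm{Class1} \mid X = x)}{\mathbb{P}(Y = \mathrm{Class2} \mid X = x)} = \beta_0 + \beta^\top x,$$
the parameters $(\beta_0, \beta)$ being estimated by maximum likelihood; the classifier then predicts the class whose estimated conditional probability exceeds $1/2$.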
```{r Logistic}
Errs <- CaretLearnAndDisplay(Errs, "Logistic", "classes ~ .", "glm")
```
We are not restricted to the two raw features but may use any transformation of them. For instance, we may use all the possible monomials up to degree $2$.
```{r Quadratic_Logistic}
Errs <- CaretLearnAndDisplay(Errs, "Quadratic Logistic", "classes ~ PredictorA + PredictorB
                             + I(PredictorA^2) + I(PredictorB^2)
                             + I(PredictorA*PredictorB)", "glm")
```
### Kernel Methods
In the nearest neighbor method, we need to supply a parameter: $k$, the number of neighbors used to define the kernel. We will visually compare the solutions obtained for a few values of $k$.
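Recall that the $k$-NN classifier is a majority vote among the $k$ nearest training points: denoting by $N_k(x)$ the set of indices of the $k$ nearest neighbors of $x$,
$$\widehat{k}(x) = \mathop{\mathrm{argmax}}_{c} \sum_{i \in N_k(x)} \mathbf{1}_{\{y_i = c\}}.$$
A small $k$ yields an irregular boundary (low bias but high variance), while a large $k$ smooths the boundary at the price of a larger bias.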
```{r k-NN}
ErrsKNN <- data.frame()
KNNKS <- c(1, 5, 9, 13, 17, 21, 25, 29)
for (k in KNNKS) {
ErrsKNN <- CaretLearnAndDisplay(ErrsKNN, sprintf("k-NN with k=%i", k),
"classes ~ .","knn", tuneGrid = data.frame(k = c(k)))
}
Errs <- rbind(Errs, ErrsKNN)
```
__caret__ provides an automatic way to select the best $k$, or any method parameter, using the chosen resampling scheme.
```{r k-NN_with_CV}
Errs <- CaretLearnAndDisplay(Errs, "k-NN with CV choice", "classes ~ .", "knn",
tuneGrid = data.frame(k = KNNKS))
```
We will look more closely at this choice at the end of this document.
## Optimization point of view
### SVM
First, here is the optimal hyperplane obtained with the plain vanilla linear SVM.
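Recall that the soft-margin linear SVM solves
$$\min_{w, b, \xi}\ \frac{1}{2}\|w\|^2 + C \sum_{i} \xi_i \quad \text{subject to} \quad y_i (w^\top x_i + b) \geq 1 - \xi_i, \quad \xi_i \geq 0,$$
where $C$ controls the trade-off between the margin width and the training errors. The kernelized variants used below replace the inner products $x_i^\top x_j$ by a kernel evaluation $K(x_i, x_j)$.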
```{r SVM}
Errs <- CaretLearnAndDisplay(Errs, "Support Vector Machine", "classes ~ .", "svmLinear")
```
If we use a polynomial kernel (with automatic degree selection), we can obtain a more complex classifier.
```{r Polynomial_kernel_SVM}
Errs <- CaretLearnAndDisplay(Errs, "Support Vector Machine with polynomial kernel",
"classes ~ .", "svmPoly")
```
We can also try a Gaussian kernel.
```{r Gaussian_kernel_SVM}
Errs <- CaretLearnAndDisplay(Errs, "Support Vector Machine with Gaussian kernel",
"classes ~ .", "svmRadial")
```
### NN
Neural networks are also present in __caret__.
```{r Neural_Network}
Errs <- CaretLearnAndDisplay(Errs, "Neural Network", "classes ~ .", "mlp")
```
A promising alternative, though not well suited to such a small dataset, is the __h2o__ package, an interface to the highly scalable __h2o__ in-memory learning library.
```{r H2O_NN, results = 'hide'}
Name <- "H2O NN"
Localh2o <- h2o.init()
twoClass.h2o <- as.h2o(Localh2o, twoClass)
Model.h2o <- h2o.deeplearning(x = 1:2, y = 3,
training_frame = twoClass.h2o,
activation = "Tanh",
hidden = c(4,3),
loss = 'CrossEntropy')
Grid.h2o <- as.h2o(Localh2o, Grid)
Pred.h2o <- h2o.predict(Model.h2o, newdata = Grid.h2o)
Pred <- factor(as.matrix(Pred.h2o$predict),
levels = c('Class1','Class2'))
PlotGrid(Pred, Name)
Pred.h2o <- h2o.predict(Model.h2o, newdata = twoClass.h2o)
Pred <- factor(as.matrix(Pred.h2o$predict),
levels = c('Class1','Class2'))
Errs <- rbind(Errs, data.frame(t(postResample(Pred, twoClass[["classes"]])),
Resample = "None", model = Name))
set.seed(Seed)
Folds <- caret::createMultiFolds(twoClass[["classes"]], k = V, times = T)
for (t in 1:T) {
for (v in 1:V) {
twoClassTrain <- slice(twoClass, Folds[[(t-1)*V+v]])
twoClassTest <- slice(twoClass, -Folds[[(t-1)*V+v]])
twoClassTrain.h2o <- as.h2o(Localh2o, twoClassTrain)
Model.h2o <- h2o.deeplearning(x = 1:2, y = 3,
training_frame = twoClassTrain.h2o,
activation = "Tanh",
hidden = c(4,3),
loss = 'CrossEntropy')
twoClassTest.h2o <- as.h2o(Localh2o, twoClassTest)
Pred.h2o <- h2o.predict(Model.h2o, newdata = twoClassTest.h2o)
Pred <- factor(as.matrix(Pred.h2o$predict),
levels = c('Class1','Class2'))
Errs <- rbind(Errs, data.frame(t(postResample(Pred, twoClassTest[["classes"]])),
Resample = sprintf("Fold%d.Rep%d",v,t), model = Name))
}
}
```
## Trees
We conclude this model exploration with some tree-based methods: plain tree, bagging, random forest, AdaBoost, C5.0...
```{r CART}
library(rpart)
Errs <- CaretLearnAndDisplay(Errs, "CART", "classes ~ .", "rpart",
                             control = rpart.control(minsplit = 5, cp = 0), tuneGrid = data.frame(cp = .02))
```
```{r CART_display}
library(rpart.plot)
Tree <- train(classes ~ ., data = twoClass, method = "rpart", control = rpart.control(minsplit = 5, cp = 0),
tuneGrid = data.frame(cp = .02), trControl = TrControl)
Tree$finalModel
rpart.plot(Tree$finalModel)
prp(Tree$finalModel, type = 2, extra = 104, nn = TRUE, fallen.leaves = TRUE,
box.col = twoClassColor[Tree$finalModel$frame$yval])
```
```{r Bagging}
Errs <- CaretLearnAndDisplay(Errs, "Bagging", "classes ~ .", "treebag",
control = rpart.control(minsplit = 5))
```
```{r Random_Forest}
Errs <- CaretLearnAndDisplay(Errs, "Random Forest", "classes ~ .", "rf",
tuneLength = 1,
control = rpart.control(minsplit = 5))
```
```{r Boosting}
Errs <- CaretLearnAndDisplay(Errs, "AdaBoost", "classes ~ .", "ada",
control = rpart.control(minsplit = 5))
```
```{r C5.0}
Errs <- CaretLearnAndDisplay(Errs, "C5.0", "classes ~ .", "C5.0")
```
# Model comparison
We will now compare our models according to $4$ criteria:

- __Accuracy__: the _naive_ empirical accuracy computed on the same dataset as the one used to train the classifier,
- __AccuracyCV__: the mean of the accuracies obtained by the resampling scheme,
- __AccuracyCVInf__: a lower bound on the mean accuracy, obtained by subtracting from the mean twice the standard deviation divided by the square root of the number of resamples,
- __AccuracyCVPAC__: a high-probability bound on the accuracy, obtained by subtracting the standard deviation from the mean.
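Writing $\bar{A}$ for the mean and $s$ for the standard deviation of the accuracies over the $T \times V = 40$ resamples, these last two quantities read
$$\mathrm{AccuracyCVInf} = \bar{A} - \frac{2\,s}{\sqrt{TV}}, \qquad \mathrm{AccuracyCVPAC} = \bar{A} - s.$$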
## Global comparison
```{r Model_Comparison}
ErrCaretAccuracy <- function(Errs) {
Errs <- group_by(Errs, model)
dplyr::summarize(Errs, mAccuracy = mean(Accuracy, na.rm = TRUE), mKappa = mean(Kappa, na.rm = TRUE),
sdAccuracy = sd(Accuracy, na.rm = TRUE), sdKappa = sd(Kappa, na.rm = TRUE))
}
ErrAndPlotErrs <- function(Errs) {
ErrCV <- ErrCaretAccuracy(dplyr::filter(Errs, !(Resample == "None")))
ErrCV <- transmute(ErrCV, AccuracyCV = mAccuracy, AccuracyCVInf = mAccuracy - 2 * sdAccuracy/sqrt(T * V),
AccuracyCVPAC = mAccuracy - sdAccuracy,
model = model)
ErrEmp <- dplyr::select(dplyr::filter(Errs, (Resample == "None")), Accuracy, model)
Err <- dplyr::left_join(ErrCV, ErrEmp)
print(ggplot(data = reshape2::melt(Err, "model"),
aes(x = model, y = value, color = variable)) +
geom_point(size = 5) +
theme(axis.text.x = element_text(angle = 45, hjust = 1),
plot.margin = grid::unit(c(1, 1, .5, 1.5), "lines")))
Err
}
Err <- ErrAndPlotErrs(Errs)
```
```{r Model_Comparison_Best}
FindBestErr <- function(Err) {
for (name in names(Err)[!(names(Err) == "model")]) {
ind <- which.max(Err[, name])
writeLines(strwrap(paste("Best method according to", name, ":", Err[ind, "model"])))
}
}
FindBestErr(Err)
```
An interesting way to assess the variability of the cross-validation results is the following parallel plot, in which the accuracy is shown for each resample in addition to the mean.
```{r Model_ParallelPlot}
PlotParallels <- function(Errs) {
pl <- ggplot(data = dplyr::filter(Errs, Resample != "None"),
aes(x = model, y = Accuracy, color = Resample)) +
geom_line(aes(x = as.numeric(model))) +
scale_x_discrete(labels = unique(Errs$model)) +
scale_color_grey(guide = FALSE)
Err <- dplyr::summarize(group_by(dplyr::filter(Errs, Resample != "None"), model),
Accuracy = mean(Accuracy), Resample = "CV")
pl <- pl + geom_line(data = Err, aes(x = as.numeric(model)), size = 3) +
theme(axis.text.x = element_text(angle = 45, hjust = 1),
plot.margin = grid::unit(c(1, 1, .5, 1.5), "lines"))
print(pl)
}
PlotParallels(Errs)
```
### K-NN Comparison
We focus here on the choice of the neighborhood size in the $k$ Nearest Neighbor method.
```{r Model_Comparison_KNN}
ErrKNN <- ErrAndPlotErrs(ErrsKNN)
FindBestErr(ErrKNN)
```