1. Machine Learning
Link to Jupyter notebook: 1.machine-learning-basics.ipynb
Introduction
Machine learning studies how to learn patterns from data and predict characteristics of unknown data.
Depending on whether the variable to predict is known during training, machine learning methods generally fall into two categories: supervised learning and unsupervised learning.
In supervised learning, the model takes features and class labels or target values as input to build the model. If the target variable (the variable to predict) is categorical (e.g. positive/negative), the problem is called classification. If the target variable is continuous (e.g. height), the problem is called regression. Most supervised learning problems fall into these two categories, although combinations of continuous and categorical outputs, or structured outputs, are also possible.
In unsupervised learning, the target variables are not specified. The objective is to identify internal structures (clusters) of the data. After model fitting, we can assign new samples to clusters or generate samples with a distribution similar to the original data. Unsupervised learning is also useful as a data preprocessing step prior to supervised learning.
Import data
Datasets for machine learning can be loaded from a variety of sources. Tabular data can be loaded through the pandas package in various formats:
Format Type | Data Description | Reader | Writer |
---|---|---|---|
text | CSV | read_csv | to_csv |
text | JSON | read_json | to_json |
text | HTML | read_html | to_html |
text | Local clipboard | read_clipboard | to_clipboard |
binary | MS Excel | read_excel | to_excel |
binary | HDF5 Format | read_hdf | to_hdf |
You can refer to Pandas IO Tools for more usage of data importing using pandas.
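For example, a CSV file can be loaded and saved with read_csv/to_csv. The snippet below is a minimal sketch; the file names are hypothetical.

```python
import pandas as pd

# read a comma-separated file into a DataFrame (file name is hypothetical)
df = pd.read_csv('data.csv')

# inspect the first rows and basic statistics
print(df.head())
print(df.describe())

# write the DataFrame back to disk without the index column
df.to_csv('data_copy.csv', index=False)
```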
For large datasets, it is recommended to use binary formats such as HDF5 and NPZ for more efficient reading and writing and reduced disk usage.
HDF5 files can be written from and read into numpy arrays conveniently using the h5py package:
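A minimal sketch of writing and reading a numpy array with h5py (the file name data.h5 and the dataset name X are assumptions):

```python
import h5py
import numpy as np

X = np.random.normal(size=(100, 10))

# write the array to an HDF5 file under the dataset name 'X'
with h5py.File('data.h5', 'w') as f:
    f.create_dataset('X', data=X)

# read the dataset back into a numpy array
with h5py.File('data.h5', 'r') as f:
    X_loaded = f['X'][:]
```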
NPZ/NPY are native numpy formats. They can be read from a file using numpy.load and written to a file using numpy.save or numpy.savez.
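For example (a sketch; the file name is hypothetical):

```python
import numpy as np

X = np.random.normal(size=(100, 10))
y = np.random.randint(2, size=100)

# save multiple arrays into a single NPZ file
np.savez('data.npz', X=X, y=y)

# load them back; the result behaves like a dictionary of arrays
data = np.load('data.npz')
X_loaded, y_loaded = data['X'], data['y']
```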
Import required Python packages
Documentation for required Python packages:
numpy: arrays
pandas: data IO, DataFrame
imbalanced-learn: deal with class imbalance
scikit-learn: machine learning
statsmodels: statistical functions
matplotlib: plotting
seaborn: high-level plotting based on matplotlib
jupyter: Python notebook
mlxtend: Extension of scikit-learn
graphviz: Python binding for Graphviz graph drawing software
wand: ImageMagick (image processing tool) binding for Python
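For reference, a typical import cell might look like the following (a sketch; only the packages actually used in your code need to be imported):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import train_test_split, KFold
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
```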
For Jupyter Notebook users, run the following magic command to display images inline.
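```python
# IPython magic: render matplotlib figures inline in the notebook
%matplotlib inline
```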
If you run Python/IPython interactively or in a script, please run the following code instead.
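One common option is to enable matplotlib's interactive mode (a sketch; the original notebook may set this up differently):

```python
import matplotlib.pyplot as plt

# in an interactive IPython session, enable interactive mode so that
# figures are shown as soon as they are created
plt.ion()

# in a non-interactive script, call plt.show() after building each figure instead
```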
Initialize random seed
We fix the random seed of numpy in this tutorial to make the results reproducible.
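For example (the seed value is arbitrary):

```python
import numpy as np

# fix numpy's global random seed so that dataset generation and
# train/test splits are reproducible across runs
np.random.seed(42)
```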
Generate datasets
You can start with simple datasets that are easy to understand and visualize before handling realistic datasets. scikit-learn provides many functions (sklearn.datasets) for generating datasets easily.
Classification dataset
For example, sklearn.datasets.make_classification generates samples from a mixture of Gaussian distributions, with parameters to specify the number of samples, number of features, number of classes, etc. The following example generates a two-class classification dataset of 1000 samples with 2 features for easy visualization. Samples are generated from two independent 2D Gaussian distributions, so this dataset is suitable for a linear classifier.
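A sketch of generating such a dataset (parameter values are illustrative, not necessarily the ones used in the notebook):

```python
from sklearn.datasets import make_classification
import matplotlib.pyplot as plt

# 1000 samples, 2 informative features, 2 classes, one Gaussian cluster per class
X, y = make_classification(n_samples=1000, n_features=2,
                           n_informative=2, n_redundant=0, n_repeated=0,
                           n_classes=2, n_clusters_per_class=1,
                           random_state=0)

# visualize the two classes in the 2D feature space
plt.scatter(X[:, 0], X[:, 1], c=y, cmap='coolwarm', s=10)
plt.xlabel('feature 1')
plt.ylabel('feature 2')
plt.show()
```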
Regression dataset
You can also use make_regression to generate a simple regression dataset. The following dataset consists of 1000 samples with 1 feature and 1 response variable. Gaussian noise with a standard deviation of 10 is added to each response value.
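A sketch (again, parameter values are illustrative):

```python
from sklearn.datasets import make_regression
import matplotlib.pyplot as plt

# 1000 samples, 1 feature, Gaussian noise with standard deviation 10
X, y = make_regression(n_samples=1000, n_features=1, n_informative=1,
                       noise=10.0, random_state=0)

plt.scatter(X[:, 0], y, s=10)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
```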
Specialized datasets
scikit-learn also provides sample generators for specialized classification/regression/clustering problems, e.g. make_circles, make_moons, make_gaussian_quantiles. These datasets can be used to demonstrate cases where simple linear classifiers or clustering algorithms fail but non-linear, more complex algorithms work better.
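For example, make_moons produces two interleaving half-circles that a linear classifier cannot separate (a sketch):

```python
from sklearn.datasets import make_moons
import matplotlib.pyplot as plt

# two interleaving half-circles with a little Gaussian noise
X_moons, y_moons = make_moons(n_samples=500, noise=0.1, random_state=0)
plt.scatter(X_moons[:, 0], X_moons[:, 1], c=y_moons, cmap='coolwarm', s=10)
plt.show()
```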
The digits dataset
scikit-learn also includes some commonly used public datasets that are useful for exploring machine learning algorithms. For example, the digits dataset is a small handwritten-digit image dataset with 10 classes.
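A sketch of loading the digits dataset and displaying a few images:

```python
from sklearn.datasets import load_digits
import matplotlib.pyplot as plt

digits = load_digits()
print(digits.data.shape)   # (1797, 64): 8x8 images flattened to 64 features

# show the first ten images with their labels
fig, axes = plt.subplots(1, 10, figsize=(10, 1.5))
for ax, image, label in zip(axes, digits.images, digits.target):
    ax.imshow(image, cmap='gray_r')
    ax.set_title(label)
    ax.axis('off')
plt.show()
```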
Dataset used in this tutorial
We use sklearn.datasets.make_classification to generate a dataset with 2 features.
Single feature analysis
Analyze the separability of classes using individual features
Plot the distribution of values for each feature. A good feature should separate the two classes well. The following plot shows that each individual feature can largely separate the two classes, though not perfectly.
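One way to draw such a plot (a sketch, assuming X and y are numpy arrays from the generated dataset above and seaborn >= 0.11):

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# put features and labels into a DataFrame for plotting
feature_names = ['feature_%d' % i for i in range(X.shape[1])]
df = pd.DataFrame(X, columns=feature_names)
df['label'] = y.astype(str)

# kernel density estimate of each feature, colored by class label
fig, axes = plt.subplots(1, len(feature_names), figsize=(4 * len(feature_names), 3))
for name, ax in zip(feature_names, np.ravel(axes)):
    sns.kdeplot(data=df, x=name, hue='label', ax=ax)
plt.tight_layout()
plt.show()
```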
Feature correlation analysis
Sometimes highly correlated features are detrimental to model performance and feature selection. A redundant feature does not provide additional information, but introduces extra parameters that make the model prone to overfitting. During feature selection, the model may assign a small weight to each redundant feature, and too many redundant features can dilute the contribution of individual features. Although the impact of redundant features on model performance depends on the machine learning algorithm used, it is good practice to identify these features and remove or merge them.
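A correlation matrix can be computed with pandas and visualized as a heatmap (a sketch, assuming X from the dataset above):

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# pairwise Pearson correlation between features
corr = pd.DataFrame(X).corr()

# heatmap of the correlation matrix; values close to +/-1 indicate redundancy
sns.heatmap(corr, annot=True, cmap='coolwarm', vmin=-1, vmax=1)
plt.show()
```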
PCA analysis
A dataset with more than 3 features cannot be visualized directly. We can use dimension reduction to embed the data in a 2D or 3D space. A dimension reduction algorithm maps data points from a high-dimensional space to a low-dimensional one while preserving distances in the original space as well as possible.
Principal Component Analysis (PCA) is the most common algorithm for dimension reduction. It maps data to a new space by linear combination of original features such that new features are linearly independent and the total variance is maximized.
If samples can be separated well in a PCA plot, a linear classifier also works well. Otherwise, a non-linear classifier may improve classification performance.
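A sketch with scikit-learn (assuming X and y from the generated dataset; with only 2 features PCA mainly rotates the data, but the same code applies in higher dimensions):

```python
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# project the data onto the first two principal components
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)
print(pca.explained_variance_ratio_)

plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, cmap='coolwarm', s=10)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.show()
```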
Data scaling
For most machine learning algorithms, it is recommended to scale the features to a common, small scale. Features on very large or very small scales increase the risk of numerical instability and make the loss function harder to optimize. Feature selection based on the fitted coefficients of a linear model assumes that the input features are on the same scale. The performance and convergence speed of gradient-based algorithms such as neural networks degrade considerably if the data is not properly scaled. Decision trees and random forests, however, are less sensitive to data scale because they use rule-based criteria.
Common data scaling methods include standard/z-score scaling, min-max scaling, robust scaling and abs-max scaling.
Standard/z-score scaling first shifts each feature to its center (mean) and then divides by its standard deviation. This method is suitable for most continuous features with an approximately Gaussian distribution.
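In the same notation as the formulas below:

\text{standard\_scale}(x_{ij}) = \frac{x_{ij} - \text{mean}_k x_{ik}}{\text{std}_k x_{ik}}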
Min-max scaling scales data into the range [0, 1]. This method is suitable for data concentrated within a range and preserves zero values for sparse data. Min-max scaling is sensitive to outliers in the data; try removing outliers or clipping the data to a range before scaling.
\text{min\_max}(x_{ij}) = \frac{x_{ij} - \text{min}_k x_{ik}}{\text{max}_k x_{ik} - \text{min}_k x_{ik}}
Max-abs scaling is similar to min-max scaling, but scales data into the range [-1, 1]. It does not shift/center the data and thus preserves the signs (positive/negative) of features. Like min-max scaling, max-abs scaling is sensitive to outliers.
\text{max\_abs}(x_{ij}) = \frac{x_{ij}}{\text{max}_k \vert x_{ik} \vert}
Robust scaling uses robust statistics (median, interquartile range) instead of the mean and standard deviation. The median and IQR are less sensitive to outliers. For features with many outliers or distributions that deviate strongly from normal, robust scaling is recommended.
\text{robust\_scale}(x_{ij}) = \frac{x_{ij} - \text{median}_k x_{ik}}{Q_{0.75}(\mathbf{x}_i) - Q_{0.25}(\mathbf{x}_i)}
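scikit-learn provides these methods in sklearn.preprocessing (a sketch, assuming X from above):

```python
from sklearn.preprocessing import StandardScaler, MinMaxScaler, MaxAbsScaler, RobustScaler

# fit the scaler on the data and transform it; in practice, fit on the
# training set only and apply the same transform to the test set
X_standard = StandardScaler().fit_transform(X)
X_minmax = MinMaxScaler().fit_transform(X)
X_maxabs = MaxAbsScaler().fit_transform(X)
X_robust = RobustScaler().fit_transform(X)
```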
Split data into training and test set
We should split the dataset into a training set and a test set to evaluate model performance. During model training, the model overfits the data to some extent, so model performance on the training set is generally biased and higher than on the test set. Overfitting can be alleviated by adding more independent samples to the dataset; the gap between training and test performance decreases as the sample size increases.
Here, we use train_test_split to randomly set 80% of the samples as training set and 20% as test set.
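A sketch (assuming X and y from the generated dataset; the random_state value is arbitrary):

```python
from sklearn.model_selection import train_test_split

# hold out 20% of the samples as the test set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
print(X_train.shape, X_test.shape)
```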
Train the model
During model training, the parameters of the model are adjusted to minimize a loss function.
Logistic regression
Logistic regression is a linear model for classification. It first forms a linear combination of the input features and then maps the combined value to a class probability between 0 and 1 through a non-linear sigmoid function. During model training, the weights of the model are adjusted such that the cross-entropy between model predictions and true labels is minimized.
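A sketch of fitting a logistic regression classifier with scikit-learn (variable names come from the split above):

```python
from sklearn.linear_model import LogisticRegression

# fit a logistic regression model on the training set
model = LogisticRegression()
model.fit(X_train, y_train)

# accuracy on the training and test sets
print(model.score(X_train, y_train), model.score(X_test, y_test))
```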
Model inspection
Feature importance
For linear models (e.g. Logistic regression, linear regression, linear SVM), feature importance is usually defined as the square of coefficients:
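For the fitted logistic regression above, this could look like (a sketch):

```python
import numpy as np

# coefficients have shape (1, n_features) for binary classification;
# squared coefficients serve as a simple feature-importance measure
feature_importance = np.square(model.coef_.ravel())
print(feature_importance)
```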
Decision boundary
We can inspect the decision boundary of a model by predicting class labels on a 2D grid of points. You can see that the decision boundary of logistic regression is a straight line, while other classifiers create non-linear and irregular decision boundaries.
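A sketch of plotting the decision boundary of the fitted model on a 2-feature dataset (names assumed from above):

```python
import numpy as np
import matplotlib.pyplot as plt

# build a dense 2D grid covering the range of the two features
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 300),
                     np.linspace(y_min, y_max, 300))

# predict a class label for every grid point and color the regions
Z = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=0.3, cmap='coolwarm')
plt.scatter(X[:, 0], X[:, 1], c=y, cmap='coolwarm', s=10, edgecolors='k')
plt.show()
```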
Evaluate the model
Predict labels on the test dataset
To evaluate performance of the model, we use the predict method of the estimator to predict class labels of test data. This will return an integer array indicating class labels.
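For example (assuming the fitted model and split from above):

```python
# predict integer class labels for the held-out test samples
y_pred = model.predict(X_test)
print(y_pred[:10])
```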
Confusion matrix
The most common way to evaluate classification performance is to construct a confusion matrix.
A confusion matrix summarizes the number of correctly or wrongly predicted samples and is usually made up of four entries:
| | Predicted Negative | Predicted Positive |
|---|---|---|
| True Negative | True Negative (TN) | False Positive (FP) |
| True Positive | False Negative (FN) | True Positive (TP) |
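A confusion matrix can be computed with sklearn.metrics.confusion_matrix (a sketch; y_test and y_pred come from the prediction step above):

```python
from sklearn.metrics import confusion_matrix

# rows correspond to true classes, columns to predicted classes
cm = confusion_matrix(y_test, y_pred)
print(cm)
```

The resulting counts for the test set are shown below.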
| | Predicted Negative | Predicted Positive |
|---|---|---|
| True Negative | 81 | 8 |
| True Positive | 27 | 84 |
Evaluation metrics for classification
A variety of metrics can be calculated from the entries of the confusion matrix.
Accuracy (0 ~ 1) summarizes both positive and negative predictions, but is biased if the classes are imbalanced.
Recall/sensitivity (0 ~ 1) summarizes how well the model finds positive samples.
Precision/positive predictive value (0 ~ 1) summarizes how many of the samples predicted as positive are truly positive.
F1 score (0 ~ 1) balances positive predictive value (PPV) and true positive rate (TPR) and is more suitable for imbalanced datasets.
Matthews correlation coefficient (MCC) (-1 ~ 1) is another metric that balances recall and precision, taking all four entries of the confusion matrix into account.
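In terms of the confusion-matrix entries above, these metrics can be written as:

\text{accuracy} = \frac{TP + TN}{TP + TN + FP + FN}

\text{recall} = \frac{TP}{TP + FN}

\text{precision} = \frac{TP}{TP + FP}

F_1 = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}

\text{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}

In scikit-learn, these metrics are available as accuracy_score, recall_score, precision_score, f1_score and matthews_corrcoef in sklearn.metrics.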
Predict class probability
Many classifiers first predict a continuous value for each sample indicating the confidence/probability of the prediction and then apply a fixed cutoff (e.g. 0.5 for probability values) to convert the continuous values to binary labels. We can get the raw prediction values through the predict_proba method.
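A sketch (assuming the fitted logistic regression model from above):

```python
# predicted probability of each class for the test samples;
# column 1 is the probability of the positive class
y_score = model.predict_proba(X_test)[:, 1]
print(y_score[:10])
```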
ROC curve and precision-recall curve
Sometimes a single fixed cutoff is insufficient to evaluate model performance. The Receiver Operating Characteristic (ROC) curve and the Precision-Recall (PR) curve are useful tools to inspect model performance at different cutoffs. Both curves are also less sensitive to class imbalance; compared to the ROC curve, the precision-recall curve is more suitable for extremely imbalanced datasets.
The area under the ROC curve (AUROC) and the average precision (AP) are single values that summarize model performance across cutoffs and are commonly used to report classification performance.
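A sketch of computing and plotting both curves with scikit-learn (y_test and y_score come from the steps above):

```python
import matplotlib.pyplot as plt
from sklearn.metrics import (roc_curve, roc_auc_score,
                             precision_recall_curve, average_precision_score)

# ROC curve: true positive rate vs. false positive rate at all cutoffs
fpr, tpr, _ = roc_curve(y_test, y_score)
auroc = roc_auc_score(y_test, y_score)

# precision-recall curve and average precision
precision, recall, _ = precision_recall_curve(y_test, y_score)
ap = average_precision_score(y_test, y_score)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].plot(fpr, tpr, label='AUROC = %.3f' % auroc)
axes[0].plot([0, 1], [0, 1], linestyle='--', color='gray')
axes[0].set_xlabel('False positive rate')
axes[0].set_ylabel('True positive rate')
axes[0].legend()
axes[1].plot(recall, precision, label='AP = %.3f' % ap)
axes[1].set_xlabel('Recall')
axes[1].set_ylabel('Precision')
axes[1].legend()
plt.show()
```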
Cross-validation
For very large datasets, a single split of the dataset into a training set and a test set is sufficient to evaluate model performance. However, for a small dataset, the test samples represent only a small fraction of the samples the model will see in future predictions, and the performance estimated on them varies greatly between resamplings of the dataset.
K-fold cross-validation
Cross-validation is a commonly used technique for model evaluation on small datasets. In k-fold cross-validation, the dataset is evenly divided into k partitions (folds). In each round of validation, the model is tested on one partition and trained on the remaining k-1 partitions. K-fold cross-validation ensures that there is no overlap between training and test samples within a round, although training sets overlap between rounds. Each sample is used as a test sample exactly once. Finally, the average performance is calculated across the k rounds.
scikit-learn provides [many functions for splitting datasets](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.model_selection).
Here, we use KFold to create 10-fold cross-validation datasets. 5 and 10 are commonly used values for k. Use 10-fold cross-validation if the sample size and computational budget permit.
The following code illustrates how KFold splits the dataset. Black boxes indicate test samples in each round.
Then we train the model on each training set and predict labels and scores on the whole dataset.
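A sketch of this step (assuming X and y from the dataset above; the KFold parameters are illustrative):

```python
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

# 10-fold split; in each round, train on 9 folds and predict on the whole dataset
kfold = KFold(n_splits=10, shuffle=True, random_state=0)
folds = []
for train_index, test_index in kfold.split(X):
    model = LogisticRegression()
    model.fit(X[train_index], y[train_index])
    y_pred = model.predict(X)
    y_score = model.predict_proba(X)[:, 1]
    folds.append((train_index, test_index, y_pred, y_score))
```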
Collect evaluation metrics
Next, we evaluate the model using k-fold cross-validation.
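A sketch that collects the metrics into a pandas DataFrame (names follow from the loop above; the layout mirrors the table below):

```python
import pandas as pd
from sklearn import metrics

records = []
for train_index, test_index, y_pred, y_score in folds:
    # compute each metric separately on the training and test portion of the fold
    for name, index in (('train', train_index), ('test', test_index)):
        records.append({
            'accuracy': metrics.accuracy_score(y[index], y_pred[index]),
            'recall': metrics.recall_score(y[index], y_pred[index]),
            'precision': metrics.precision_score(y[index], y_pred[index]),
            'f1': metrics.f1_score(y[index], y_pred[index]),
            'mcc': metrics.matthews_corrcoef(y[index], y_pred[index]),
            'roc_auc': metrics.roc_auc_score(y[index], y_score[index]),
            'average_precision': metrics.average_precision_score(y[index], y_score[index]),
            'dataset': name,
        })
metrics_table = pd.DataFrame.from_records(records)
```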
| | accuracy | recall | precision | f1 | mcc | roc_auc | average_precision | dataset |
|---|---|---|---|---|---|---|---|---|
| 0 | 0.832222 | 0.797386 | 0.863208 | 0.828992 | 0.666847 | 0.908640 | 0.931433 | train |
| 1 | 0.810000 | 0.809524 | 0.755556 | 0.781609 | 0.614965 | 0.882184 | 0.844361 | test |
| 2 | 0.818889 | 0.781737 | 0.843750 | 0.811561 | 0.639439 | 0.898143 | 0.915890 | train |
| 3 | 0.900000 | 0.961538 | 0.862069 | 0.909091 | 0.804601 | 0.985176 | 0.988980 | test |
| 4 | 0.828889 | 0.796909 | 0.853428 | 0.824201 | 0.659380 | 0.905196 | 0.924640 | train |
Summarize evaluation metrics
Take average of model performance across cross-validation runs:
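A sketch, assuming the metrics_table DataFrame built above:

```python
# average each metric over cross-validation folds, separately for train and test
print(metrics_table.groupby('dataset').mean())
```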
| dataset | accuracy | recall | precision | f1 | mcc | roc_auc | average_precision |
|---|---|---|---|---|---|---|---|
| test | 0.831000 | 0.794919 | 0.861380 | 0.823425 | 0.667898 | 0.903903 | 0.921943 |
| train | 0.833778 | 0.795302 | 0.862274 | 0.827428 | 0.669625 | 0.906041 | 0.924631 |
ROC and PR curves
For each cross-validation run, compute an ROC/PR curve. Then plot the mean and confidence intervals across cross-validation runs.
Homework
Understand and run all code in this tutorial using Jupyter. You can generate different types of dataset or use a real dataset.
Try different classifiers (SVC, random forest, logistic regression, KNN) and compare model performance.
Try different K's in K-fold cross-validation and compare mean and variance of model performance.
Try different class ratios and compare model performance.
Further reading
Books
Trevor Hastie, Robert Tibshirani, Jerome Friedman. (2009). The Elements of Statistical Learning.
Christopher Bishop. (2006). Pattern Recognition and Machine Learning.
Kevin P. Murphy. (2012). Machine Learning: A Probabilistic Perspective.
Sergios Theodoridis. (2009). Pattern Recognition.
Class imbalance
He, H., and Garcia, E.A. (2009). Learning from Imbalanced Data. IEEE Transactions on Knowledge and Data Engineering 21, 1263–1284.
Batista, G.E.A.P.A., Prati, R.C., and Monard, M.C. (2004). A Study of the Behavior of Several Methods for Balancing Machine Learning Training Data. SIGKDD Explor. Newsl. 6, 20–29.
Chawla, N.V., Bowyer, K.W., Hall, L.O., and Kegelmeyer, W.P. (2002). SMOTE: Synthetic Minority Over-sampling Technique. J. Artif. Int. Res. 16, 321–357.
Machine learning in R
The caret package (a tutorial in GitBook): http://topepo.github.io/caret