Logistic Regression
Linear regression helps us find the correlation for a continuous output, while logistic regression addresses classification. In classification the boundary matters, so an intuitive approach is to make the regression saturate quickly away from the boundary; see the logistic (sigmoid) function below:

$$g(z) = \frac{1}{1 + e^{-z}}$$

The basic idea of logistic regression is that the hypothesis takes the linear approximation and maps it through the logistic function for binary prediction, thus:

$$h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}}$$
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
X = np.linspace(-6, 6, 200)
Y = 1 / (1 + np.exp(-X))
plt.figure()
plt.plot(X, Y)
plt.grid(True)
plt.show()
The cost function is merely a quantitative measure of the difference between the hypothesis and the observations. The logistic cost function is defined as:

$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log h_\theta(x^{(i)}) + \left(1 - y^{(i)}\right)\log\left(1 - h_\theta(x^{(i)})\right)\right]$$

Thus:

$$\frac{\partial J(\theta)}{\partial \theta_j} = -\frac{1}{m}\sum_{i=1}^{m}\left[\frac{y^{(i)}}{h_\theta(x^{(i)})} - \frac{1 - y^{(i)}}{1 - h_\theta(x^{(i)})}\right]\frac{\partial h_\theta(x^{(i)})}{\partial \theta_j}$$

And we know

$$g'(z) = g(z)\left(1 - g(z)\right)$$

Thus

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$$
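To make the derivation concrete, here is a minimal NumPy sketch of the cost and gradient (the helper names sigmoid, cost and gradient are illustrative, and X is assumed to already carry a leading column of ones so that theta includes the intercept):
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    # X: (m, n) with a leading column of ones, theta: (n,), y: (m,) of 0/1 labels
    h = sigmoid(X.dot(theta))
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

def gradient(theta, X, y):
    # vectorized form of (1/m) * sum((h - y) * x_j)
    h = sigmoid(X.dot(theta))
    return X.T.dot(h - y) / len(y)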
Let’s use exercise #2 as an example; first, visualize the data:
def render_exams(data, admitted, rejected):
    plt.figure(figsize=(6, 6))
    plt.scatter(np.extract(admitted, data['ex1']),
                np.extract(admitted, data['ex2']),
                c='b', marker='+', label='admitted')
    plt.scatter(np.extract(rejected, data['ex1']),
                np.extract(rejected, data['ex2']),
                c='y', marker='o', label='rejected')
    plt.xlabel('Exam 1 score')
    plt.ylabel('Exam 2 score')
    plt.gca().set_aspect('equal', 'datalim')
data = np.loadtxt('ex2data1.txt', delimiter=',', dtype={
'names': ('ex1', 'ex2', 'score'),
'formats': ('f4', 'f4', 'i4')},
)
admitted = data['score'] == 1
rejected = data['score'] == 0
render_exams(data, admitted, rejected)
plt.legend();
We will not reinvent the wheel; instead we leverage scikit-learn’s OneVsRestClassifier and LogisticRegression.
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
X = np.column_stack((data['ex1'], data['ex2']))
Y = data['score']
# liblinear supports the l1 penalty
classifier = OneVsRestClassifier(LogisticRegression(penalty='l1', solver='liblinear')).fit(X, Y)
print 'Coefficients: ', classifier.coef_
print 'Intercept: ', classifier.intercept_
coef = classifier.coef_
intercept = classifier.intercept_
# the boundary is where theta0 + theta1 * ex1 + theta2 * ex2 = 0;
# see the contour approach below for a more general solution
ex1 = np.linspace(30, 100, 100)
ex2 = -(coef[:, 0] * ex1 + intercept[:, 0]) / coef[:,1]
render_exams(data, admitted, rejected)
plt.plot(ex1, ex2, color='r', label='decision boundary');
plt.legend();
Coefficients:  [[ 0.09788619  0.09175555]]
Intercept:  [[-11.57316814]]
Predict the admission using the model:
print classifier.score(X, Y)
theta = np.concatenate((intercept[0], coef[0]), axis=0)
freq = 1 / (1 + np.exp(-1 * np.dot(theta, [1, 45, 85])))
print "For a student with scores 45 and 85, we predict an admission probability of %f" % freq
0.91
For a student with scores 45 and 85, we predict an admission probability of 0.652701
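As a sanity check, the fitted classifier can produce the same probability directly via predict_proba (assuming the classifier object above is still in scope); the second column is the probability of the positive class, i.e. admission:
# probability of admission for exam scores 45 and 85;
# this should roughly match the manual sigmoid computation above
print classifier.predict_proba([[45, 85]])[0][1]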
Regularization
The idea behind regularization: a plain linear fit may not work very well (high bias, or underfitting), so we might want some ingredients from a higher-order polynomial for a better fit; but by no means should we expect the fit to go the extra mile and hit every single point (high variance, or overfitting). There are two commonly-used regularization forms to cap the impact of the higher-order polynomial terms:
- $L_1$ form: $\lambda \sum_{j=1}^{n} |\theta_j|$
- $L_2$ form: $\lambda \sum_{j=1}^{n} \theta_j^2$
See more on regularization of linear models on Wikipedia.
Luckily, the LogisticRegression in scikit-learn has integrated both $L_1$ and $L_2$ regularization, named penalty. More concretely, Lasso corresponds to the $L_1$ form, and Ridge regression to the $L_2$ form.
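For reference, the $L_2$-regularized cost simply adds the penalty term to the $J(\theta)$ defined above (the intercept $\theta_0$ is conventionally left unregularized); scikit-learn's C acts roughly as the inverse of $\lambda$:

$$J_{reg}(\theta) = J(\theta) + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$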
Here is a more sophisticated example, the acceptance test for microchips. We are going to demonstrate the impact of regularization: high variance and high bias.
def render_tests(data, accepted, rejected):
    plt.figure(figsize=(6, 6))
    plt.scatter(np.extract(accepted, data['test1']),
                np.extract(accepted, data['test2']),
                c='b', marker='+', label='accepted')
    plt.scatter(np.extract(rejected, data['test1']),
                np.extract(rejected, data['test2']),
                c='y', marker='o', label='rejected')
    plt.xlabel('Microchip Test 1')
    plt.ylabel('Microchip Test 2')
    plt.gca().set_aspect('equal', 'datalim')
data = np.loadtxt('ex2data2.txt', delimiter=',', dtype={
'names': ('test1', 'test2', 'score'),
'formats': ('f4', 'f4', 'i4')},
)
accepted = data['score'] == 1
rejected = data['score'] == 0
render_tests(data, accepted, rejected)
plt.legend();
It is clear that we need higher-order features for the input. First define map_features to map the features to higher-order polynomial terms:
def map_features(f1, f2, order=1):
    '''map f1 and f2 to their higher-order polynomial terms'''
    assert order >= 1

    def terms():
        for i in range(1, order + 1):
            for j in range(i + 1):
                yield np.power(f1, i - j) * np.power(f2, j)

    # materialize the generator: np.vstack expects a sequence
    return np.vstack(list(terms()))
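A quick sanity check on the output shape, with illustrative input values: for order=6 the nested loops yield 2 + 3 + ... + 7 = 27 polynomial terms, which matches the 27 coefficients reported by the classifier below.
sample = map_features(np.array([0.5]), np.array([-0.5]), order=6)
print sample.shape   # (27, 1)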
We will use up to 6th-order polynomial terms for the LogisticRegression, with $C = 1$, aka the inverse regularization strength:
out = map_features(data['test1'], data['test2'], order=6)
X = out.transpose()
Y = data['score']
classifier = OneVsRestClassifier(LogisticRegression(penalty='l1', C=1, solver='liblinear')).fit(X, Y)
print 'Coefficients: ', classifier.coef_
print 'Intercept: ', classifier.intercept_
print 'Accuracy: ', classifier.score(X, Y)
Coefficients:  [[ 0.68658946  1.28037888 -4.86238904 -1.62177411 -2.34202973  0.  0.  0.  0.  0.  0.  0.  0. -2.36760082  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0. ]]
Intercept:  [[ 1.86957363]]
Accuracy:  0.796610169492
Visualizing the decision boundary is a little bit tricky: we will draw the level-0 contour of the decision function to get the boundary:
def draw_boundary(classifier):
    dim = np.linspace(-1, 1.5, 1000)
    dx, dy = np.meshgrid(dim, dim)
    v = map_features(dx.flatten(), dy.flatten(), order=6)
    z = (np.dot(classifier.coef_, v) + classifier.intercept_).reshape(1000, 1000)
    plt.contour(dx, dy, z, levels=[0], colors=['r'])
render_tests(data, accepted, rejected)
draw_boundary(classifier)
plt.legend();
Here is an example of overfitting, with a large $C = 1000$, i.e. weak regularization:
overfitter = OneVsRestClassifier(LogisticRegression(penalty='l1', C=1000, solver='liblinear')).fit(X, Y)
render_tests(data, accepted, rejected)
draw_boundary(overfitter)
plt.legend();
Here is an example of underfitting, with a small $C = 0.01$, i.e. strong regularization:
underfitter = OneVsRestClassifier(LogisticRegression(penalty='l2', C=0.01)).fit(X, Y)
render_tests(data, accepted, rejected)
draw_boundary(underfitter)
plt.legend();
Logistic regression is just one of the approaches to map a continuous linear space to a classification; the Support Vector Machine is probably more popular in the field.
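For comparison, here is a minimal sketch of an SVM with an RBF kernel on the raw microchip scores (using sklearn.svm.SVC; the kernel handles the non-linearity, so no polynomial feature mapping is needed, and the hyperparameters here are illustrative, not tuned):
from sklearn.svm import SVC

# RBF-kernel SVM on the two raw test scores; Y is still data['score'] from above
X_raw = np.column_stack((data['test1'], data['test2']))
svm = SVC(kernel='rbf', C=1.0).fit(X_raw, Y)
print 'SVM training accuracy: ', svm.score(X_raw, Y)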
Multi-class Classification
We can use OneVsOneClassifier for multi-class classification:
import scipy.io
data = scipy.io.loadmat('ex3data1.mat');
# pick 100 random handwritten digit samples
import random
indexes = random.sample(range(0, 5000), 100)
Render the selected handwritten digits:
figure = plt.figure(figsize=(10, 10))
for index, i in enumerate(indexes):
    plt.subplot(10, 10, index + 1)
    plt.axis('off')
    plt.imshow(data['X'][i].reshape(20, 20).transpose(), cmap='Greys')
from sklearn.multiclass import OneVsOneClassifier
clf = OneVsOneClassifier(LogisticRegression(penalty='l2', C=0.01)).fit(data['X'], data['y'].ravel())
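One-vs-one fits a binary classifier for every pair of classes, so with the 10 digit classes we expect 10 * 9 / 2 = 45 underlying estimators (a quick check against the fitted estimators_ attribute):
print len(clf.estimators_)   # 45 pairwise classifiers for 10 classes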
Let’s print out the predicted digits:
predicted = clf.predict(data['X'][indexes])
for l, u in zip(range(0, 91, 10), range(10, 101, 10)):
    # the dataset labels the digit 0 as 10, so map it back to 0 for display
    print map(lambda x: 0 if x == 10 else x, predicted[l:u])
print "Predict score for the sample input: %f" % \
clf.score(data['X'][indexes], data['y'][indexes])
[7, 8, 2, 4, 1, 8, 5, 2, 0, 8]
[8, 1, 7, 6, 7, 8, 8, 7, 7, 3]
[8, 3, 4, 8, 3, 6, 1, 6, 9, 6]
[1, 4, 1, 5, 7, 2, 5, 1, 9, 0]
[0, 7, 8, 7, 2, 5, 0, 9, 7, 8]
[1, 4, 8, 3, 2, 0, 3, 9, 3, 1]
[5, 8, 6, 8, 6, 8, 4, 1, 3, 3]
[6, 0, 6, 1, 8, 4, 0, 0, 1, 3]
[8, 2, 4, 1, 3, 8, 9, 5, 5, 0]
[5, 7, 2, 9, 5, 9, 6, 7, 9, 0]
Predict score for the sample input: 0.880000