Wednesday, March 27, 2013

More on the Data Science competition in Kaggle

In my last post, I talked about the Data Science competition in Kaggle. In that post, I ran an optimized SVM model with a Gaussian kernel. In this post, I'll go into a little more depth about the data and the models.

I characterized the data as "well structured". I have already mentioned that the data is continuous with no missing values. I used a combination of numpy and pandas to look for missing values, check the mean and standard deviation of each feature, produce histograms to look for skew and outliers, and build a correlation matrix to see whether any features had strong linear correlations. These are not formal statistical tests, but the process gave me a good feel for the data and for whether I needed any preprocessing.
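Roughly, those checks look something like the following (a minimal sketch, not my actual code; the file name and loading step here are just placeholders):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('train.csv')   # placeholder path

print df.isnull().sum()         # count of missing values in each feature
print df.describe()             # mean, std, min and max of each feature
print df.corr()                 # linear correlation matrix

df.hist(bins=30)                # histograms to look for skew and outliers
plt.show()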

Once I determined that I had a good data set, I proceeded to modeling. Since there are no categorical features, I decided not to run any kind of decision tree analysis. Since the response is a binary class label, I started with logistic regression and a linear SVM. Each of these gave a score of .797.
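Those baselines were along these lines (a sketch, not the exact code I ran; it assumes the same 70/30 split into x_train, x_test, y_train, y_test used later in this post):

from sklearn import svm
from sklearn.linear_model import LogisticRegression

logreg = LogisticRegression()
logreg.fit(x_train, y_train)
print "Logistic regression score:", logreg.score(x_test, y_test)

lin_svc = svm.SVC(kernel='linear')
lin_svc.fit(x_train, y_train)
print "Linear SVM score:", lin_svc.score(x_test, y_test)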

At this point, I decided to try a grid search. Here's the description from the user's guide: GridSearchCV implements a “fit” method and a “predict” method like any classifier except that the parameters of the classifier used to predict is optimized by cross-validation.

Here's the code:

from sklearn import svm, grid_search

# Search over C, gamma, and kernel; GridSearchCV picks the best combination by cross-validation
param_grid={'C':[.01,.1,1.0,10.0,100.0],'gamma':[.1,.01,.001,.0001],'kernel':['linear','rbf']}
svr=svm.SVC()
grid=grid_search.GridSearchCV(svr,param_grid)
grid.fit(x_train,y_train)
print "The best classifier is:", grid.best_estimator_
print "The best score is ", grid.best_score_
print "The best parameters are ", grid.best_params_

And here are the results:

The best classifier is: SVC(C=10.0, cache_size=200, class_weight=None, coef0=0.0, degree=3,
  gamma=0.01, kernel=rbf, max_iter=-1, probability=False, shrinking=True,
  tol=0.001, verbose=False)
The best score is  0.898426323319
The best parameters are  {'kernel': 'rbf', 'C': 10.0, 'gamma': 0.01}

Ironically, I had already arrived at this optimized model just by plugging in values by hand. Grid search is not a quick process; I can't give you the exact amount of time it takes because I just go off and do something else while it is running. Note that the score is not quite as high as my model in the last post. I'm guessing this is because I split the data and used 70% for training and 30% for testing, while GridSearchCV's best score comes from cross-validation on the training portion alone, so each fold is fit on even less of the data.
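One way to check that guess (a quick sketch; I haven't shown its output here) is to take the best estimator from the grid search and score it on the held-out 30%, which is the number directly comparable to the last post:

# best_estimator_ is refit on all of x_train, so this score is on the
# untouched 30% test split rather than on a cross-validation fold
best = grid.best_estimator_
print "Score on the held-out test set:", best.score(x_test, y_test)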

I also ran a nearest neighbor model. Here's the code and the results:

from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

neigh=KNeighborsClassifier()
neigh.fit(x_train,y_train)
y_pred3=neigh.predict(x_test)
neigh_score=neigh.score(x_test,y_test)
print "The score from K neighbors is", neigh_score
cm3=confusion_matrix(y_test,y_pred3)
print "This is the confusion matrix for K neighbors",(cm3)

The score from K neighbors is 0.883333333333
This is the confusion matrix for K neighbors [[133  22]
 [ 13 132]]

The score for the K neighbors classifier is almost as high as the optimized SVM with the rbf kernel.
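The model above uses the scikit-learn defaults (n_neighbors=5). If I wanted to push it further, the same grid search idea applies; a sketch (not something I ran for the numbers above):

from sklearn import grid_search
from sklearn.neighbors import KNeighborsClassifier

knn_param_grid = {'n_neighbors': [3, 5, 7, 9, 11, 15]}
knn_grid = grid_search.GridSearchCV(KNeighborsClassifier(), knn_param_grid)
knn_grid.fit(x_train, y_train)
print "Best n_neighbors:", knn_grid.best_params_
print "Best cross-validated score:", knn_grid.best_score_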

I'd be very interested to hear what others are finding as they analyze this set.

Scikit-learn: Machine Learning in Python, Pedregosa et al., JMLR 12, pp. 2825-2830, 2011.

Tuesday, March 19, 2013

Kaggle Data Science competition

Kaggle.com is sponsoring another learning competition for machine learning. This one specifically mentions using scikit-learn in Python. See the competition details here.

It is amazing how much more is available in scikits just since I have been writing this blog. Recently, I have switched to using Python(x,y), a distribution that includes everything you need for machine learning. And it's specifically for Windows!! See the information on this distribution here. You do have to be careful about the plugin versions, though. Specifically, the latest version of scikit-learn is 0.13.1, while the version that ships with Python(x,y) is 0.12, so you'll have to update it. Don't ask me how. I took lots of wrong turns, finally figured it out, but probably can't reproduce it.

The data set from Kaggle is well structured. There are 40 features and 999 training examples. The feature data is all continuous and there are no missing values. I was able to write code that matches the SVM benchmark score on the leaderboard: .913.

Someday I'll have time to figure out how to use github and I'll post my code there. For now, here's what I have:

import csv
import numpy as np
import pandas as pd
import scipy as sp
import matplotlib.pyplot as plt

# Read in the training features for the Kaggle scikit-learn competition
csv_file_object=csv.reader(open('C:/Users/numbersmom/Dropbox/kaggle sci kit competition/train.csv'))
header=csv_file_object.next()
records=[]
for row in csv_file_object:
    records.append(row)
records=np.array(records)
records=records.astype(np.float)

# Read in the training labels
csv_file_object=csv.reader(open('C:/Users/numbersmom/Dropbox/kaggle sci kit competition/train_label.csv'))
header=csv_file_object.next()
cl=[]
for row in csv_file_object:
    cl.append(row)
cl=np.array(cl)
cl=cl.astype(np.int8)
cl=cl.reshape(999,)   # 999 training examples
tr_ex=np.size(cl)

#Need to use 70% of the data for training and 30% for testing
n_train=int(.7*tr_ex)
x_train,x_test=records[:n_train,:],records[n_train:,:]
y_train,y_test=cl[:n_train],cl[n_train:]

#SVM code

from sklearn import svm
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

# I tried different models, but this one with C=10 and gamma=.01
# gives the SVM benchmark score.
clf=svm.SVC(C=10.0,gamma=.01,kernel='rbf',probability=True)
clf.fit(x_train,y_train)
print clf.n_support_
y_pred1=clf.predict(x_test)
gau_score=clf.score(x_test,y_test)
print "This is the score for the rbf model",gau_score
cm1=confusion_matrix(y_test,y_pred1)
print "This is the confusion matrix for the rbf model",(cm1)
print "finished"

The confusion matrix looks like this: 

         pred 0    pred 1
act 0       141        14
act 1        12       133

There's lots of other stuff I can try to get that number higher. You can check out the helpful user's guide for more information.