
Prevent overfitting in Logistic Regression using Sci-Kit Learn

I trained a model using Logistic Regression to predict whether a name field and a description field belong to the profile of a male, a female, or a brand. My train accuracy is around 99% while my test accuracy is around 83%. I have tried implementing regularization by tuning the C parameter, but the improvement was barely noticeable. I have around 5,000 examples in my training set. Is this a case where I just need more data, or is there something else I can do in Sci-Kit Learn to get my test accuracy higher?
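For reference, a minimal sketch of the kind of setup described above; the file name profiles.csv and the column names name, description, and gender are placeholders for illustration, not the actual data:

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical data file and column names, standing in for the ~5,000 examples
df = pd.read_csv("profiles.csv")

# Combine the two text fields and vectorize them
X = TfidfVectorizer().fit_transform(df["name"] + " " + df["description"])
y = df["gender"]  # 'male', 'female', or 'brand'

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=101)

# In scikit-learn, smaller C means stronger (L2) regularization
clf = LogisticRegression(C=0.1, max_iter=1000)
clf.fit(X_train, y_train)
print(clf.score(X_train, y_train), clf.score(X_test, y_test))

A large gap between the two printed scores, like the 99% vs. 83% described above, is the overfitting symptom in question.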

Overfitting is a multifaceted problem. It could be your train/test/validation split (anything from 50/40/10 to 90/9/1 could change things). You might need to shuffle your input. Try an ensemble method, or reduce the number of features. You might have outliers throwing things off.

Then again, it could be none of these, or all of these, or some combination of them.

For starters, try plotting the test score as a function of the test split size and see what you get; a sketch of that diagnostic follows.
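A rough sketch of that diagnostic, assuming X and y are your feature matrix and label vector:

import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Test accuracy as a function of the test split size;
# X and y are assumed to already exist
split_sizes = [0.1, 0.2, 0.3, 0.4, 0.5]
scores = []
for size in split_sizes:
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=size, shuffle=True, random_state=101)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    scores.append(model.score(X_test, y_test))

plt.plot(split_sizes, scores, marker="o")
plt.xlabel("test split size")
plt.ylabel("test accuracy")
plt.show()

If the curve drops sharply as the test split grows, the model is leaning heavily on having seen most of the data, which points toward needing more data or a simpler model.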

# The 'C' value in Logistic Regression works very similarly to the one in the
# Support Vector Machine (SVM) algorithm. When I use SVM I like to use
# GridSearchCV to find the best possible values for 'C' and 'gamma';
# maybe this can shed some light:

# For SVC you could also tune 'gamma' and 'kernel':
# param_grid = {'C': [0.1, 1, 10, 100, 1000],
#               'gamma': [1, 0.1, 0.01, 0.001, 0.0001],
#               'kernel': ['rbf']}

# Only 'C' carries over to LogisticRegression, so tune just that here:
param_grid = {'C': [0.1, 1, 10, 100, 1000]}

import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report, confusion_matrix

# Train and fit your model to see initial values
# (df_feat and df_target are assumed to hold your features and labels)
X_train, X_test, y_train, y_test = train_test_split(
    df_feat, np.ravel(df_target), test_size=0.30, random_state=101)
model = SVC()
model.fit(X_train, y_train)
predictions = model.predict(X_test)
print(confusion_matrix(y_test, predictions))
print(classification_report(y_test, predictions))

# Find the best 'C' value
grid = GridSearchCV(SVC(), param_grid, refit=True, verbose=3)
grid.fit(X_train, y_train)  # the search must be fitted before best_params_ is available
print(grid.best_params_)
c_val = grid.best_estimator_.C

# Then you can re-run predictions on this grid object just like you would with a normal model
grid_predictions = grid.predict(X_test)
print(confusion_matrix(y_test, grid_predictions))
print(classification_report(y_test, grid_predictions))

# Use the best 'C' value found by GridSearchCV to refit your LogisticRegression
logmodel = LogisticRegression(C=c_val)
logmodel.fit(X_train, y_train)
print(logmodel.score(X_test, y_test))
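Since the question is about LogisticRegression, a shorter route is to grid-search 'C' on LogisticRegression itself rather than transferring the value found for an SVC. A sketch, reusing X_train, X_test, y_train, y_test from above:

# Grid-search 'C' directly on LogisticRegression with 5-fold cross-validation
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

log_grid = GridSearchCV(LogisticRegression(max_iter=1000),
                        {'C': [0.01, 0.1, 1, 10, 100]},
                        cv=5, refit=True)
log_grid.fit(X_train, y_train)
print(log_grid.best_params_)
print(log_grid.score(X_test, y_test))

The best C for an SVC does not necessarily carry over, since the hinge loss and the logistic loss trade off against the penalty differently, so searching on the model you actually deploy is usually the safer choice.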
