
scikit-learn classification thresholds

So I'm using scikit-learn to do some binary classification, and right now I'm trying the Logistic Regression classifier. After training the classifier, I print out the predicted classes and the per-class probabilities:

logreg = LogisticRegression()
logreg.fit(X_train, y_train)
print(logreg.predict(X_test))
print(logreg.predict_proba(X_test))

and so I get something like:

[-1 1 1 -1 1 -1...-1]
[[  8.64625237e-01   1.35374763e-01]
 [  3.57441028e-01   6.42558972e-01]
 [  1.67970096e-01   8.32029904e-01]
 [  9.20026249e-01   7.99737513e-02]
 [  1.20456011e-02   9.87954399e-01]
 [  6.48565595e-01   3.51434405e-01]...]

etc. So it looks like whenever a class's probability exceeds 0.5, that's the class the object is assigned to. I'm looking for a way to adjust this cut-off so that, for example, the probability of class 1 must exceed 0.7 for an object to be classified as such. Is there a way to do this? I was already looking at some parameters like `tol` and `class_weight`, but I wasn't sure whether they were what I was looking for or whether they were working...
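(For reference, the default behaviour described above can be checked directly: for a binary problem, `predict` returns the class with the higher probability, which is equivalent to a 0.5 cut-off on the second column of `predict_proba`. A minimal sketch on synthetic data — the dataset here is illustrative, not the asker's:)

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# synthetic binary data (any binary dataset would do)
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

logreg = LogisticRegression()
logreg.fit(X, y)

proba = logreg.predict_proba(X)  # shape (n_samples, 2), rows sum to 1
preds = logreg.predict(X)

# predict() picks the column with the larger probability,
# which for two classes matches a 0.5 cut-off on column 1
manual = logreg.classes_[(proba[:, 1] > 0.5).astype(int)]
print(np.array_equal(preds, manual))  # True
```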

You can set your own threshold like this:

import numpy as np

THRESHOLD = 0.7
preds = np.where(logreg.predict_proba(X_test)[:, 1] > THRESHOLD, 1, 0)

Please refer to the related question "sklearn LogisticRegression and changing the default threshold for classification".
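Putting it together, here is a minimal end-to-end sketch on synthetic data (the dataset and the 0.7 threshold are illustrative). Note that `np.where(..., 1, 0)` produces 0/1 labels; with -1/1 labels as in the question, use `np.where(..., 1, -1)` instead.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

logreg = LogisticRegression()
logreg.fit(X_train, y_train)

THRESHOLD = 0.7
# classify as 1 only when P(class 1) exceeds the custom threshold
preds = np.where(logreg.predict_proba(X_test)[:, 1] > THRESHOLD, 1, 0)

# raising the threshold above 0.5 can only turn 1s into 0s, so the
# custom predictions never say 1 where the default predict() says 0
default = logreg.predict(X_test)
print((preds <= default).all())  # True
```

A side effect worth knowing: raising the threshold trades recall for precision on class 1, so it's worth checking both metrics after tuning it.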
