
How can I use a logistic regression based on both l1 and l2 regularizations?

Recently I ported my code from R to Python, and I need some help with it. As far as I know, logistic regression in sklearn supports only an l1 or an l2 regularization term, corresponding to lasso and ridge regression, respectively. However, using both l1 and l2 regularization terms together, i.e., ElasticNet, may work much better.

In R, there is a notable package, glmnet, which implements the above ideas perfectly, whereas the glmnet package for Python seems to support only Linux, not the Windows 10 system on my computer (please refer to this). In addition, it would be much better if the package could visualize the results (such as the shrinkage path).
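A glmnet-style shrinkage path can be approximated in scikit-learn by refitting an l1-penalized logistic regression over a grid of C values and tracking the coefficients; this is a minimal sketch on a synthetic dataset (the data, grid, and file name are illustrative assumptions, not part of the question):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# synthetic data purely for illustration
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

# grid of inverse regularization strengths (small C = strong penalty)
Cs = np.logspace(-3, 2, 30)
coefs = []
for C in Cs:
    clf = LogisticRegression(penalty='l1', solver='liblinear', C=C)
    clf.fit(X, y)
    coefs.append(clf.coef_.ravel())
coefs = np.array(coefs)  # shape: (len(Cs), n_features)

# plot each coefficient against log10(C): coefficients shrink to
# exactly zero as the l1 penalty grows, as in glmnet's path plot
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for saving to file
import matplotlib.pyplot as plt

plt.plot(np.log10(Cs), coefs)
plt.xlabel('log10(C)')
plt.ylabel('coefficient value')
plt.savefig('shrinkage_path.png')
```

This reproduces the qualitative picture glmnet gives, though scikit-learn does not use glmnet's warm-started coordinate-descent path, so each fit here is independent.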

# logistic regression with penalty terms in sklearn
from sklearn.linear_model import LogisticRegression, SGDClassifier

seed = 0  # any fixed random state

LogisticRegression(C=0.1, random_state=seed, penalty='l1', solver='liblinear')  # l1 needs liblinear or saga
LogisticRegression(C=0.1, random_state=seed, penalty='l2')
SGDClassifier(loss='log', penalty='elasticnet')  # loss renamed to 'log_loss' in scikit-learn >= 1.1
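Besides SGDClassifier, LogisticRegression itself accepts penalty='elasticnet' when paired with the saga solver (available since scikit-learn 0.21), with l1_ratio controlling the l1/l2 mix; a minimal sketch on a synthetic dataset (the data and parameter values are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# synthetic data purely for illustration
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# l1_ratio mixes the penalties: 0.0 -> pure l2 (ridge), 1.0 -> pure l1 (lasso)
clf = LogisticRegression(penalty='elasticnet', solver='saga',
                         l1_ratio=0.5, C=0.1, max_iter=5000)
clf.fit(X, y)
print(clf.coef_)  # one coefficient per feature
```

Unlike SGDClassifier, this uses a full-batch solver, so results are deterministic for a fixed dataset and closer in spirit to glmnet's fits.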
