
sklearn LogisticRegression without regularization

The logistic regression class in sklearn comes with L1 and L2 regularization. How can I turn off regularization to get the "raw" logistic fit, such as glmfit in Matlab? I think I could set C to a very large number, but I don't think that is wise.

For more details, see the documentation: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression

Yes, choose as large a number as possible. In regularization, the cost function includes a regularization term, and keep in mind that the C parameter in sklearn's regularization is the inverse of the regularization strength.

C in this case is 1/lambda, subject to the condition that C > 0.

Therefore, as C approaches infinity, lambda approaches 0. When this happens, the cost function becomes your standard error function, since the regularization term becomes, for all intents and purposes, 0.
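A minimal sketch of this idea (the dataset and the specific value C=1e12 are just illustrative assumptions; any sufficiently large C makes the L2 penalty negligible):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# Toy data purely for illustration
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# C is the inverse of the regularization strength; a very large C makes
# the L2 penalty term negligible, approximating an unregularized fit.
clf = LogisticRegression(C=1e12, solver='lbfgs', max_iter=1000)
clf.fit(X, y)

print(clf.coef_, clf.intercept_)
```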

Update: In sklearn versions 0.21 and higher, you can disable regularization by passing penalty='none'. See the LogisticRegression documentation for details.
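A short sketch of that option (assuming scikit-learn >= 0.21; note that in 1.2 and later the string 'none' is deprecated in favor of passing None, and the penalty must be used with a solver that supports it, such as lbfgs):

```python
from sklearn.linear_model import LogisticRegression

# scikit-learn >= 0.21: disable regularization explicitly
clf = LogisticRegression(penalty='none', solver='lbfgs', max_iter=1000)

# scikit-learn >= 1.2: pass None instead of the string 'none'
# clf = LogisticRegression(penalty=None, solver='lbfgs', max_iter=1000)
```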

Go ahead and set C as large as you please. Also, make sure to use l2, since l1 with that implementation can be painfully slow.

I had the same question and, in addition to the other answers, tried the following:

If setting C to a large value does not work for you, also set penalty='l1'.
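A sketch of that combination (note this is an assumption about the intent of the answer; the l1 penalty requires a solver that supports it, such as liblinear or saga):

```python
from sklearn.linear_model import LogisticRegression

# L1 penalty with a very large C: the penalty term is again made negligible,
# but the optimization path differs from the L2 case.
clf = LogisticRegression(penalty='l1', C=1e12, solver='liblinear')
```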
