
LAD with L2 norm in python? (sklearn)

I want to implement an LAD version of linear_model.Ridge() in sklearn: the regularization is still on the L2 norm, but the model minimizes the sum of absolute deviations rather than the sum of squared errors. That is, we're minimizing

min_w Σᵢ |yᵢ − xᵢᵀw| + α‖w‖₂²

Is that possible?

If you use SGDRegressor in scikit-learn, specify the epsilon_insensitive loss function, and set the epsilon value to zero, you get an LAD model with L2 regularization.
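A minimal sketch of that setup (the data, coefficients, and alpha value here are made up for illustration): with epsilon=0, the epsilon-insensitive loss reduces to the absolute deviation |y − ŷ|, and penalty="l2" keeps the Ridge-style regularizer.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

# Synthetic data: y = X @ w_true + small noise (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=200)

# epsilon_insensitive loss with epsilon=0 is the absolute deviation |y - y_hat|;
# penalty="l2" gives the L2 regularization term, alpha controls its strength.
model = SGDRegressor(
    loss="epsilon_insensitive",
    epsilon=0.0,
    penalty="l2",
    alpha=1e-4,
    max_iter=5000,
    tol=1e-6,
    random_state=0,
)
model.fit(X, y)
print(model.coef_)
```

Since this is fit by stochastic gradient descent rather than a closed-form solver, the coefficients are approximate and somewhat sensitive to the learning-rate settings; scaling the features first (e.g. with StandardScaler) usually helps convergence.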

